The host institution, Griffith University, has made accountability in teaching and learning outcomes a priority as part of a university-wide focus on assessment. It employs consensus moderation, using exemplars of previous student responses to the same or similar tasks to build a shared understanding of standards and thereby ensure consistency of marking. Drawing on the work of Sadler (2009b, 2010a), consensus moderation was the methodology chosen for the AiM project, and it was used to review the alignment between assessment and learning in tertiary music programs, particularly as represented by the Creative and Performing Arts (CAPA) Threshold Learning Outcome statements (TLOs). The team anticipated that this methodology would also assure consistent interpretation of the TLOs in the specific context of music.
The methodology draws on a current Griffith University project on developing consensus moderation approaches and processes (Sadler 2007, 2009a, 2009b, 2010a, 2010b, 2011) to assure standards of student achievement within and between areas of study. In this process, academics consider a range of student responses to a particular assessment task and share their views on the standard of achievement each response demonstrates, producing a common understanding of the standards represented by the grades and marks awarded. The approach has been adopted at the Queensland Conservatorium Griffith University (QCGU) as part of the university-wide move towards consensus moderation, and the project has found it effective in ensuring rigour in music assessment practices. The partner institutions have participated in inter-institutional consensus moderation as a step towards inter-institutional consensus on standards of student achievement. This has helped ensure the comparability of the grades used to measure student achievement within and between courses, programs of study, and the partner institutions, providing a model for moving towards sector-wide consensus on such matters in a particular domain.
To provide context, a summary of assessment practices in Australian higher music education was first developed, along with a substantial annotated bibliography of assessment literature with a particular focus on music. A process was then developed to map individual assessment activities in courses to the CAPA TLOs. This mapping required building consensus on the equivalencies and overlaps between the institutional graduate attribute statements included in the assessment activity data and the CAPA TLOs. The process was extended to include degree program learning outcomes when these emerged as likely to be significant for reporting purposes.
Assessment in music performance courses was of most interest because these are generally the keystone courses of music performance degrees, the courses that characterise their programs. The project identified a particularly effective assessment practice in one department of the lead institution that supplemented the usual recital or performance assessment with activities such as reflective writing and students' contributions to their learning process throughout the semester; this practice was adapted and adopted across all departments of the Bachelor of Music (BMus) program. The investigation was enriched through focus groups with teachers and students, which provided a deeper understanding of participants' perceptions of the purposes and practices of assessment.
Involvement was extended beyond the lead institution and its two institutional partners through an international symposium on assessment in music, which attracted high-quality presentations, stimulated new thinking, and resulted in the publication of an edited book on assessment in music drawing on material presented at the symposium.
Sadler, D. R. (2007). Perils in the meticulous specification of goals and assessment criteria. Assessment in Education: Principles, Policy & Practice, 14(3), 387–392. doi: 10.1080/09695940701592097
Sadler, D. R. (2009a). Indeterminacy in the use of preset criteria for assessment and grading. Assessment & Evaluation in Higher Education, 34(2), 159–179. doi: 10.1080/02602930801956059
Sadler, D. R. (2009b). Moderation, grading and calibration. Retrieved October 21, 2011, from http://www.griffith.edu.au/__data/assets/pdf_file/0017/211940/GPA-Symposium2009-Edited-Keynote-Address-FINAL.pdf
Sadler, D. R. (2010a). Assuring Academic Achievement Standards at Griffith University. Brisbane: Griffith University.
Sadler, D. R. (2010b). Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in Higher Education, 35(5), 535–550. doi: 10.1080/02602930903541015
Sadler, D. R. (2011). Academic freedom, achievement standards and professional identity. Quality in Higher Education, 17(1), 85–100. doi: 10.1080/13538322.2011.554639