Process for sharing standards between institutions
A consensus moderation exercise was held with the AiM project partners from QCGU, the University of Newcastle and the University of Tasmania. The intention was to undertake this process as simply as possible. The example files were shared in advance using Dropbox, email was used to inform all participants about the process to be followed, and the meeting itself was conducted using Skype Premium, which enables video conference calls. All of these aspects of the exercise were completely satisfactory, demonstrating that such activities can be conducted at almost no cost.
The first 10 minutes of two audio recordings of fourth-year, second-semester Bachelor of Music exams from the lead institution were considered first for their overall quality rather than against highly specified criteria. Feedback was provided on the most noteworthy aspects of the performances, both strengths and weaknesses, with criteria then available to categorize aspects of this feedback. This process conforms to Royce Sadler's (2013) notion of backwards assessment. The design of this exercise was also influenced by the project leader's experience of assessment exercises in Greece (at the International Society for Music Education World Conference), Finland (at the Pentacon+ Assessment Seminar) and Austria (at the Polifonia Working Group on Assessment and Standards seminar "Enhancing Standards for Assessment through Effective Practice") that were designed to explore various degrees of criteria specification and to enhance assessment practices through sharing understandings of standards.
The project leader chaired proceedings. Once video communication was established, each location shared its views on each of the two performances, which participants had assessed first individually and then in discussion with other members of their local group to form a consensus view. The project team felt that this was an effective and enjoyable way to compare evaluations of quality and to share the standards that apply to performance, and that the process added a sense of community and clarity. All agreed that this process can work effectively when conducted remotely in this way and that future sessions would be of great benefit in continuing inter-institutional dialogue on performance assessment practices. Not knowing the students being examined was considered a particular advantage, as participants were not unduly influenced by prior knowledge of the students.
In summary, about 1000 words of feedback were produced for these two performance extracts. The depth and quantity of feedback were greatest when the assessor had specialist knowledge, and it was the specialists who drew attention to the subtle technical aspects. However, the general impressions conveyed by the assessors' feedback were consistent in their appraisal of overall quality. This was also evident in the awarding of provisional grades, with all assessors agreeing within one grade level.
Some observations of the process were shared, and it was agreed that at future moderation meetings, participants should be given a copy of the scores and program notes (if these were required in the original assessment) in order to provide a fuller picture of the performance. It was also agreed that participants should listen to the complete examination, as some performances improved over time and stamina was a relevant criterion that could only be assessed in a complete presentation. It was noted that the percentages attached to standards differed at the University of Tasmania as compared to QCGU and the University of Newcastle, with percentages slightly lower for the same grade.
There was some debate about missing the visual aspect of the presentation and whether examiners hear differently when they see a performance. Some felt that gestures, eye contact and overall connection with the audience are important components of the performance. Don Lebler contributed his knowledge of similar exercises, stating that our counterparts in Europe strongly prefer to moderate using a video performance. He also added that the results of this exercise were similar to those shared at the Polifonia conference in Vienna in 2013: at the global level there was very little discrepancy between the evaluations of specialists and non-specialists, provided that all assessors were musicians. Any discrepancies were explained by specialists when they shared their views, so the assessment exercise became a learning experience for those examining outside their speciality. In this case, participants' comments, holistic judgments and grades were very closely aligned.
All three project partners have established archives holding recorded examples across all year levels, all instruments and voice ranges, and a variety of performance styles. The QCGU will maintain a repository for its own recordings as well as those contributed by project partners and other participants. These archives are available to share and could be used for preliminary training of new examiners as well as for inter-institutional moderation activities such as these.