Evaluating Student Work Using Models Derived from Those Used in Nationally Administered Examinations

Author(s): 
Joel Phillips
Volume: 
12
Year: 
1998
Abstract: 

Each year, thousands of students take one of two music examinations administered nationally by the Educational Testing Service. Students who hope to attend graduate school in music take the revised Graduate Record Examination (GRE) in Music, and high school students who wish to receive undergraduate credit for the first year of music theory take the Advanced Placement (AP) Examination in Music Theory.

For a number of years, both of these examinations have incorporated tasks typical of those given in college music theory classrooms. For example, students are asked to realize a figured bass, take melodic and harmonic dictation, or harmonize a melody. Because of the complexity of these tasks, the responses must be judged by human experts. These experts must agree upon the way in which an item will be scored and apply those standards with such consistency that, given a particular student response, all experts so trained should arrive at the same score within a very small margin of error.

In this article I describe and illustrate the types of judgments made in these examinations. I then demonstrate how I have applied models derived from the scoring guides of these examinations to meet my own classroom needs. Because the nature of the feedback on the national examinations differs from that found in the classroom, I also demonstrate the type of feedback I give students, with particular emphasis on peer evaluation and collaborative learning. Because I have had the privilege and pleasure of training readers for each of these examinations, I can offer what I hope is an interesting perspective on the task.