
Teacher Evaluations

Lately I’ve been thinking about assessment and how good teaching is measured in higher education. Primary and secondary schools now rely heavily on student success with standardized tests and on teacher success in implementing prescribed instruction. While this system is designed to evaluate good teaching by its impact on student learning, those assessment measures would not be appropriate for higher education, where objectives shift from competencies to higher-order skills like critical thinking and global awareness.

Ever since I entered higher education 15 years ago, first as a student and now as a professor, I’ve seen teachers evaluated through three main measures: teaching demonstrations, classroom evaluations, and student evaluations. All three of these components scrutinize the teacher rather than the learner, because measuring student learning in higher education is much more subjective and dependent on institutional makeup and mission, save for some attempts at gathering quantitative data. How can measurements of good teaching be tied to significant data on student learning in higher education?

Teaching demonstrations are the first assessment of collegiate teachers, conducted during the interview process for new positions. The weight placed on the demonstration will vary depending on the type of institution (R1 vs. SLAC) and perhaps to some extent on the discipline (i.e., humanities, STEM, professional degree programs, etc.). Teaching demonstrations are inherently focused on the teacher, even though teaching implies that there is also a learner. Search committees are looking for a fit, someone who brings a breath of fresh air or conforms to departmental culture. Teachers interviewing for positions are measured on performance, not impact.

Classroom evaluations are completed by colleagues and administrators for use in a professor’s promotion and tenure dossier. Like teaching demonstrations, the focus of a classroom evaluation is on the teacher: Was the instructor prepared? Does the instructor demonstrate competency with the topic? Did the instructor begin and end class on time? Even questions more sensitive to student learning still focus on the teacher: Did the instructor make effective use of technology and media? Did the instructor answer student questions clearly and effectively? Did the instructor engage all students in the class?

Student evaluations cover many of the same questions as classroom evaluations. And even if a focus on teacher performance is still present, there is also the added angle of student satisfaction. While a liberal education aims to empower students to be critical and reflective, the intent behind student evaluations is not always matched by their use. Some institutions try to bridge the schism by adding questions that invite students to reflect on their own performance. But in many cases those questions are absent or not effectively integrated with the assessment of teaching and learning.

So why is teaching evaluated with little consideration for learning? One of the great rewards of being a college professor is having some autonomy over curriculum and instruction. Any measure that assesses student learning through standardized testing and adherence to prepackaged courses would threaten the whole notion of a liberal education. But is there a more effective way to measure good teaching in higher education, one that places as much emphasis on student learning as on teacher performance? I’d be interested in hearing how other institutions have handled evaluations.

Too Many Graded Assignments

Over the past few years I’ve added more graded activities to my classes than ever before. In a 2006 section of early music history, there were 8 total graded items on the syllabus (participation, one presentation, two quizzes, two short papers, a midterm exam, and a final exam; the final exam was worth 40% of the final grade). My 2011 section of early music history had 24 graded items (participation, 12 reflections posted to the online discussion forum, 4 quizzes, 2 tests, 2 short papers, a presentation, a midterm exam, and a final exam; the final exam was worth 20% of the final grade). My recent music theory classes had daily assignments and sight-singing recordings, my music history classes had regular discussion forum postings and listening quizzes, and my world music classes had frequent vocabulary and map quizzes. My rationale for increasing the number of graded activities over the course of the semester was understandable: students were engaged with the material on a daily basis; I could monitor whether students were keeping up with the content; and I could assure grade-obsessed students that poor performance on one quiz would have minimal impact on their final average.

In light of some recent findings on student learning, such as the Collegiate Learning Assessment, I’ve become more critical of this trend. Grades, we need to remember from time to time, are a score of performance and not always an adequate indicator of learning or of the lasting impact of a course. If students can amass 60% of their final grade from low-impact, low-challenge activities that can be completed the night before, will they be prepared for sustained projects? How can we encourage students to develop self-guided, regular learning habits when we prompt them with dozens of assignments? Will students who encounter few high-stakes assessment measures be successful after graduation?

One approach may be to require regular learning activities but grade them only sporadically. I’ve applied this approach through surprise (i.e., “pop”) quizzes that permit notes (but not textbooks) and through written reflections, so that students are encouraged to read chapters regularly. Imagine an online discussion forum or quiz for each class meeting, graded on only a few random days during the semester.

Another approach is to tie a variety of learning activities to one big “performance” in a flipped classroom. The Reacting to the Past (RTTP) gaming model inspires students to guide themselves through reading, writing, and problem-solving exercises outside of class in order to “win.” Student performance in the classroom is assessed, but the variety of learning activities leading up to that performance is not graded.

A third approach that comes to mind is what I call “participatory homework” in my music theory classes. In addition to a handful of homework assignments that students submit in Finale, daily exercises from the workbook or Moodle are tied to attendance. Students are counted present in class only if they have completed the preliminary work.

A gradual shift toward numerous graded assignments resembles trends in K-12 education, easing the transition to a collegiate learning environment. The intent is good; an aggregate of numerous low-stakes assessments will help more students receive passing grades. But will this lead to success in the future? Life and a competitive global marketplace will present numerous high-stakes challenges. Will higher passing and graduation rates attest to the impact of an institution on student learning and post-graduation success? I suppose that depends on who answers.