Over the past few years I’ve added more graded activities to my classes than ever before. In a 2006 section of early music history, there were 8 graded items on the syllabus (participation, one presentation, two quizzes, two short papers, a midterm exam, and a final exam; the final exam was worth 40% of the final grade). My 2011 section of early music history had 24 graded items (participation, 12 reflections posted to the online discussion forum, 4 quizzes, 2 tests, 2 short papers, a presentation, a midterm exam, and a final exam; the final exam was worth 20% of the final grade). My recent music theory classes had daily assignments and sight singing recordings, my music history classes had regular discussion forum postings and listening quizzes, and my world music classes had frequent vocabulary and map quizzes. My rationale for increasing the number of graded activities over the course of the semester was understandable: students were engaged with the material on a daily basis; I could monitor whether students were keeping up with the content; and I could assure grade-obsessed students that poor performance on one quiz would have minimal impact on their final average.
In light of some recent findings on student learning, such as those from the Collegiate Learning Assessment, I’ve become more critical of this trend. Grades, we need to remind ourselves from time to time, are a score of performance and not always an adequate indicator of learning or the lasting impact of a course. If students can amass 60% of their final grade from low-impact, low-challenge activities that can be completed the night before, will they be prepared for sustained projects? How can we encourage students to develop self-guided, regular learning habits when we prompt them with dozens of assignments? Will students be successful after graduation if they have encountered few high-stakes assessments along the way?
One approach may be to require regular learning activities but grade them only sporadically. I’ve applied this approach through surprise (i.e. “pop”) quizzes that permit notes (but not textbooks) or through written reflections, so that students are encouraged to read chapters regularly. Imagine an online discussion forum or quiz for each class meeting that is graded only on a few random days during the semester.
Another approach is to tie a variety of learning activities to one big “performance” in a flipped classroom. The Reacting to the Past (RTTP) gaming model inspires students to guide themselves through reading, writing, and problem-solving exercises outside of class in order to “win.” Student performance in the classroom is assessed, but the variety of learning activities leading up to the performance is not graded.
A third approach that comes to mind is what I call “participatory homework” in my music theory classes. In addition to a handful of homework assignments that students submit in Finale, daily exercises from the workbook or Moodle are tied to attendance. Students are counted present in class only if they have completed the preliminary work.
A gradual shift toward numerous graded assignments resembles trends in K-12 education and eases the transition to a collegiate learning environment. The intent is good: an aggregate of numerous low-stakes assessments will help more students receive passing grades. But will this lead to success in the future? Life and a competitive global marketplace present numerous high-stakes challenges. Will higher passing and graduation rates attest to the impact of an institution on student learning and post-graduation success? I suppose that depends on who answers.
One of the most commonly used components of online course management sites is the discussion forum. Many instructors require students to post reflections on assigned readings with the ultimate goal that an intellectual dialogue will materialize in the forum. I’ve experimented with this approach for several years now and have developed a 5-point grading rubric that has yielded more insightful student writing and has improved assessment efficiency:
A “5” Response follows directions and is well organized, clear in prose, and original in thought. The submission is highly polished and free of grammatical and spelling errors. The author references supporting resources properly, makes clever use of analysis and course terminology, and demonstrates a high level of critical thinking and persuasive writing. Excellent.
A “4” Response follows directions, presents good organization and clear prose, and provides some original ideas. There may be a misspelling or grammatical error, but the submission is acceptable in writing style. The author refers to appropriate resources and provides some discussion of the piece, event, or essay. The author may occasionally get off topic and include some ideas that do not strengthen the submission. An original thesis is present, but the argument may be lost at times. Good.
A “3” Response has some recognizable organization and readable prose, but summarizes rather than argues. There are large block quotes or paraphrased passages from the assigned reading that are not effectively integrated into the argument. There are frequent misspelled words and grammatical errors and the formatting does not always follow directions. A musical piece is discussed, but not tied into the argument, and there is little evidence of critical thinking. Average.
A “2” Response rarely follows directions. The topic is related to the assignment but there is no argument that responds to the prompt. There is no reference to the provided resources or poor use of them. The submission is rife with grammatical and spelling errors. It appears that the student has cut and pasted information from various sources and strung the ideas together into an incoherent mess. Components are missing, such as a discussion of the required piece, event, or essay. Poor.
A “1” Response is incomplete. The word count may be well under the requirement, and the author has clearly neither read the directions nor taken interest in the assignment. The writing style is incomprehensible at times and the focus is unclear. The student has submitted a related but off-topic writing sample for this assignment. Failing.
A “0” Response is absent or plagiarized.
For this system to work, students must have a clear understanding of what “critical thinking” means and be able to recognize the difference between strong and poor college writing. In my experience, students will submit their best efforts once they have actively explored a range of samples I provide them. I give the students 2 or 3 anonymous posts from previous semesters along with the rubric. Groups evaluate the examples and assign grades using the 5-point scale. I ask each group to report their grades and explain why one submission demonstrates critical thinking and another does not. The consensus is usually that critical thinking requires the author to present an original idea and support it with examples from both within and outside of the assigned reading. The rubric is helpful, but the follow-up assessment and discussion are what really encourage the best student writing for the semester.
Online course management discussion forums are a useful tool for engaging students outside of class with a discussion of major course topics. However, it is not enough to grade students on their “participation.” The public forum is a fruitful opportunity to push students to put their best work forward and to adopt critical thinking habits for the semester. The 5-point rubric offers students clear expectations for quality work and gives instructors an efficient evaluation tool.