Calibrated Peer Review

Peer review is a popular way for instructors to have students evaluate one another's work. Students gain significant benefits from engaging in peer review, such as strengthening their ability to teach fellow students and to give constructive feedback. Peer review is also a good way to help students develop the higher-order thinking skills in Bloom's taxonomy (analysis, synthesis, and evaluation). However, peer review is only as good as the students' understanding of the material, even when the instructor provides a rubric against which to measure student submissions.

Calibrated peer review addresses this weakness by giving the instructor additional input into how each student will evaluate another student's submission. Calibrated Peer Review (or CPR) is a web-based application developed by the University of California, though the process itself does not have to run through a web-based application (it's an idea as much as it is a technology). There is a handy flow chart illustrating the process, but it basically breaks down as follows:

  1. Students first write and submit an essay on a topic and in a format specified by the instructor.
  2. Training to evaluate comes next. Students assess three ‘calibration’ submissions against a detailed set of questions that address the criteria on which the assignment is based. Students individually evaluate each of these calibration submissions according to the questions specified by the rubric and then assign a holistic rating out of 10. Feedback at this stage is vital. If the evaluations are poorly done and don’t yet meet the instructor’s expectations, the students get a second try. The quality of the evaluations is taken into account in the next step: the evaluation of real submissions from other students.
  3. Once the deadline for calibration evaluations has passed, each student is given anonymous submissions by three other students. They use the same rubric to evaluate their peers’ work, this time providing comments to justify their evaluation and rating. Poor calibration performance in step 2 decreases the weight of the grades they give to their peers’ work. After they’ve evaluated all three, they evaluate their own submission. [From the Overview Page]
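The weighting described in steps 2 and 3 can be sketched in code. CPR does not publish its actual formula, so everything below — the deviation-based weight and the weighted average — is a hypothetical illustration of the general idea, not CPR's implementation:

```python
# Hypothetical sketch of calibration-weighted peer grading.
# The specific formulas are illustrative assumptions, not CPR's
# actual algorithm.

def calibration_weight(student_ratings, instructor_ratings):
    """Return a reviewer weight in [0, 1]: the closer a student's
    holistic ratings of the three calibration essays are to the
    instructor's, the higher the weight (illustrative formula)."""
    max_deviation = 10 * len(instructor_ratings)  # ratings are out of 10
    deviation = sum(abs(s, ) if False else abs(s - i)
                    for s, i in zip(student_ratings, instructor_ratings))
    return 1 - deviation / max_deviation  # 1.0 = perfect agreement

def weighted_peer_grade(ratings_with_weights):
    """Combine peer ratings, each weighted by that reviewer's
    calibration performance."""
    total_weight = sum(w for _, w in ratings_with_weights)
    if total_weight == 0:
        return None  # no credible reviews to average
    return sum(r * w for r, w in ratings_with_weights) / total_weight

# A reviewer who closely matched the instructor counts for more:
w = calibration_weight([7, 4, 9], instructor_ratings=[8, 3, 9])
grade = weighted_peer_grade([(8, w), (6, 0.5), (9, 0.9)])
```

Under this sketch a perfectly calibrated reviewer gets weight 1.0, and a reviewer whose calibration ratings were badly off contributes proportionally less to a peer's final grade.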

CPR is available for purchase from their website for a fairly reasonable price ($5,000 for a Ph.D.-granting institution). If S&T were interested in this technology, we could certainly ask whether they offer a pilot program or similar options.

Once CPR is available on a campus, instructors can either access assignments they have created or search for assignments in CPR’s database. However, since CPR is still a fairly new technology, the selection for a given topic is somewhat limited. For instance, I was only able to find 3 assignments when searching the database for college freshman/sophomore English literature, and only a few more (maybe a dozen) for college freshman/sophomore English composition. As more instructors use this technology, the assignment database should grow by leaps and bounds. But one major roadblock may be the difficulty inherent in creating an assignment. The interface seems user-friendly enough, but each assignment takes a great deal of thought and planning. The instructor needs to create well-developed learning objectives, write three sample submissions (one each of high, medium, and low quality), and then create the rubric questions the students will use as their measuring stick for evaluating other students.
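The pieces an instructor must assemble can be summarized as a small data structure. This is a sketch of what an assignment bundles together based on the requirements above; the class and field names are hypothetical, not CPR's actual schema:

```python
# Illustrative model of a CPR assignment's components; names are
# assumptions for the sake of the sketch, not CPR's real data model.
from dataclasses import dataclass

@dataclass
class CalibrationEssay:
    text: str
    quality: str            # "high", "medium", or "low"
    instructor_rating: int  # holistic rating out of 10

@dataclass
class CPRAssignment:
    title: str
    learning_objectives: list[str]
    rubric_questions: list[str]       # the students' measuring stick
    calibration_essays: list[CalibrationEssay]

    def ready(self) -> bool:
        """All three sample qualities must exist before students
        can be trained against them."""
        qualities = {e.quality for e in self.calibration_essays}
        return qualities == {"high", "medium", "low"}
```

Seeing the parts laid out this way makes the authoring burden concrete: objectives, a full rubric, and three complete sample essays must all exist before a single student can be trained.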

For more information about Calibrated Peer Review, visit ELI’s 7 Things You Should Know About Calibrated Peer Review.