Earlier in the year, I shared my bruising experiences of introducing ‘flipped assessment’ (summative peer review of essays) into my 2nd year Economics class during semester one, 2014.
The experience caused a lot of soul searching and reflection on the unique nature of educational innovation (as opposed to scientific innovation), and later, musings on what academic circumstances would lead to the effective incubation of classroom innovation.
At the time of writing those reflections, I was in the middle of implementing a round of much-needed updates to the peer-review system and I promised to provide a report on the outcomes. Well, here it is.
First, a quick re-cap on what I changed:
- Because some students were unhappy with the lack of faculty involvement in their marking and feedback, I added a layer of faculty marking to all ‘essay outline’ (one page) assignments, such that students received three peer assessments (50% total) together with one faculty assessment (the remainder);
- Further, I developed software that would pre-identify the roughly 15% of students who received the most incoherent set of assessments (i.e. across the 19 binary rubric assertions) on the major (2,000 word) essay, pre-enrolling them in a faculty assessment layer, again taking up 50% of these students’ marks;
- Next, I added an online Topic ‘opt-in’ stage, so that the peer allocation system could ensure that assessors didn’t mark the same topic they themselves had submitted on (thus addressing a particular gripe of the high-fliers who, in 2014, took the rational strategy of submitting vague or incomplete minor assignments so that other students wouldn’t ‘steal’ their ideas for the main essay!);
- Finally, we tweaked the rubric to make it clearer and reduce further any ambiguity.
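The incoherence screen in the second point above can be sketched in a few lines. To be clear, this is a hypothetical reconstruction rather than the actual software: I assume each of the 19 rubric assertions is marked true/false by each assessor, and that a submission's 'incoherence' is the number of pairwise disagreements among its assessors, with the top ~15% routed to faculty marking.

```python
from itertools import combinations

def incoherence_score(assessments):
    """Score one submission. `assessments` is a list of per-assessor
    rubric vectors, each a list of booleans (one per rubric assertion).
    The score counts pairwise disagreements across all assertions."""
    score = 0
    for a, b in combinations(assessments, 2):
        score += sum(x != y for x, y in zip(a, b))
    return score

def flag_for_faculty_review(all_assessments, fraction=0.15):
    """Return the submission ids whose peer assessments disagree the
    most (the top `fraction`), to be pre-enrolled for faculty marking.
    `all_assessments` maps submission id -> list of rubric vectors."""
    ranked = sorted(all_assessments,
                    key=lambda sid: incoherence_score(all_assessments[sid]),
                    reverse=True)
    cutoff = max(1, round(len(ranked) * fraction))
    return ranked[:cutoff]
```

The design choice here is that disagreement among assessors is used as a proxy for an unreliable peer mark; a submission where all three assessors tick the same boxes scores zero and stays fully peer-marked.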
And importantly, I wrote at length about these updates in FAQ posts pre-loaded onto forums as the students started the new semester in 2015. I wanted the students to know that I’d listened, learned, and changed how things were done. I told them in lectures that they were part of an innovation cycle, and that I was constantly monitoring all aspects of the system for their benefit. I asked them for their heightened scrutiny and feedback.
I told them, above all, that I wanted the system to deliver educationally. If it didn’t, I said, I’d junk it.
So what were the results?
First, the value of the system — I re-ran a targeted, anonymous student survey on the peer review system, with a response rate of nearly 50%. Figure 1 below shows the inter-year comparison of the students’ overall satisfaction with the system.
Figure 1: Inter-year comparison of overall student satisfaction with the peer review system.
As you can see, the key difference is in the tails. The updates appear to have had the biggest impact on reducing the fraction of very unhappy students, whilst increasing the fraction of very happy students. The right-hand side of Figure 1 groups ‘Strongly Agree’ and ‘Agree’ into a ‘Positive’ sentiment column, and compares it to a similarly grouped ‘Negative’ column. Iteration 2 of the system sees almost two thirds of students happy with how it was done. About one in five are still not that pleased. This is down from the one in three of 2014, but still, there’s obviously some work to do.
Figure 2: Inter-year comparison of student evaluation of the impact of the peer review system on critical writing skills.
What about critical writing skills — something I was particularly keen to improve using the system? Figure 2 gives a similar analysis to Figure 1, based on the anonymous survey. Here, the shift from Iteration 1 to Iteration 2 has been less about the tails, and more about a general shift up-field: on average, students moved from a negative disposition towards a neutral or positive disposition.
The right-hand side of Fig. 2 shows this well — where one in five students in 2014 was ‘negative’ about the benefits of the system to their critical writing skills, the figure is now just one in ten; a fraction I’m comfortable with. On the other hand, almost three in five students are now ‘positive’ towards the value that the system brings to their critical writing skills. This is a good number. But again, it could be better.
Taken together, it was encouraging to see that my changes had a measurably positive impact on student perceptions.
Of course, this is not the same as saying that the system has actually improved the students’ writing skills, but it seems reasonable to expect a good degree of correlation between the two.
Second — what about perceptions of the overall quality of the unit? This matters to me since, during 2014, the unit’s standardised evaluations took a big hair-cut. Indeed, the hair-cut was across the board: despite my innovation being confined to ‘feedback’ (plus the usual minor content updates), every one of the five standard areas in our unit evaluation system (‘learning objectives’, ‘intellectually stimulating’, ‘learning resources’, ‘feedback’, and ‘overall satisfaction’) took a dive of between 0.2 and 0.4 (out of 5).
I’m pleased to report that the evaluations for 2015 bounced back strongly: I received my highest ever ‘overall’ satisfaction rating, with the ‘feedback’ segment leading the way, also at the highest point I’ve ever recorded (and miles above the faculty average for this dimension). Similarly, across the board, even in seemingly unrelated dimensions such as ‘intellectually stimulating’, the responses all went back up to their 2013 high-points.
Oh, and since I’ve previously compared my experiences to Chris Froome’s Tour de France results, it was interesting to note that like my fortunes, Froome bounced back from a poor showing in 2014 to be back on the top step in 2015. Let’s see how long the synchrony lasts!
As my thoughts start to turn towards semester 1, 2016, I’m in a better place than Spring 2014, but there’s still work to do. For one, I’ve committed to keeping the peer review system: the educational benefits now seem to be finding their way to the majority of students; the systems and software I have now allow me to monitor what is happening with far greater detail and intervene as I need to; and, the system is scalable — ensuring a degree of future-proofing as my enrolments likely float north of the 100 or so I have now.
The challenge now is to take another forensic look at the student comments, to seek further wisdom from my own peers, and to look carefully at any remaining pressure points in the educational experience from a student perspective.