The Highs and Lows of the Educational Innovator: of cobbles, haircuts, and flipped assessment (Pt. Two)

Chris Froome competing in the Vuelta a Andalucía, February 2015. Image credit: https://www.flickr.com/photos/106253394@N02/16547689506

In Part One of this article, I shared a bruising experience on the road to educational nirvana. During 2014, I'd introduced a radical change to the assessment of within-semester essays in my unit: I'd decided to flip the assessment and put the students in charge. I implemented summative Peer Review.

Despite the hard work of setting the system up, about a third of the students objected to it, ultimately flicking mud at the unit altogether. The crown had slipped.

What to do?

Well, as I noted last time, my unit's fortunes seemed to be mirroring Chris Froome's Tour de France highs and lows over 2013 and 2014. And despite crashing out in 2014, Froome had healed, re-grouped, and re-committed himself to being on top of the podium again in 2015.

I figured I'd better do the same!

On balance, I'm convinced that the learning benefits of Peer Review are worth pursuing. There's gold in Peer Review. In my nearly 10 years of teaching, I haven't deployed a more promising critical thinking and learning moment for my students.

Summative peer review flips the tables. Students have to come at the work in front of them with the eyes of a domain specialist. They need to thoroughly understand the marking criteria and the principles and ideals of a strong assignment, and above all, they need to find a voice of constructive, encouraging commentary and feedback. All of this pushes the learner into a place of high-level synthesis. The student becomes the educator. This is why I want to see Peer Review thrive.

But after 2014, I felt that I needed to show the next cohort of students that I’d learned too. Peer Review 2.0 had to be a substantially better model than 1.0.

To this end, I'm in the middle of the next iteration: I've tweaked the binary rubric; added an automatic faculty assessment step for high-variance peer review cases; committed to a faculty assessment alongside the peers for all of the initial, minor assignments; and implemented a topic-screening step to protect against idea stealing in the review step (a minor issue for some high-fliers).
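To make the moderation step concrete, here is a minimal sketch, in Python, of how the high-variance trigger could work. Everything here (the function name, the threshold value, the use of standard deviation as the measure of disagreement) is my own illustrative assumption, not the actual Moodle workflow:

```python
from statistics import pstdev

# Hypothetical threshold (illustrative only): peer scores that spread
# more widely than this, in raw mark units, trigger an automatic
# faculty assessment of the essay.
VARIANCE_THRESHOLD = 1.5

def needs_faculty_review(peer_scores: list[float]) -> bool:
    """Flag a submission whose peer reviewers disagree too strongly.

    `peer_scores` holds the mark given by each peer reviewer.
    A single review can never be cross-checked, so it is always
    escalated to faculty.
    """
    if len(peer_scores) < 2:
        return True
    return pstdev(peer_scores) > VARIANCE_THRESHOLD

# Three reviewers broadly agree: no escalation needed.
print(needs_faculty_review([7.0, 7.5, 8.0]))  # False
# Reviewers disagree sharply: escalate to faculty.
print(needs_faculty_review([4.0, 9.0, 6.5]))  # True
```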

But more importantly, I decided to walk the path of honesty with the present cohort. I wrote extensive posts on Moodle airing the main issues of 2014 and what I'd done to address them. In short, I wanted the system to have maximum credibility, which must include honest appraisal and transparent redress.

Whilst it is too early to tell whether all my changes have patched the problems of 2014, the experience has certainly led me to reflect more deeply on the nature of educational innovation. Let me share some of those reflections.

First, educational innovation is risky in a way that research innovation is not. If I try out something new in my research and it doesn't work, I simply write it up in my logbook and move on to the next idea, safe in the knowledge that I won't need to look down that corridor of knowledge again. I've wasted some time and resources, but these costs fall on me alone. In educational innovation, by contrast, testing an idea has consequences: we bring the new approach into the classroom, and students experience our successes and failures first hand. There is no 'lab' or 'logbook' of educational innovation. It's all live trials.

Second, because of the amplified risks of educational innovation, we should expect it to be under-provided by the faculty. Just like scientific research, educational innovation can be modelled as taking draws from a distribution of more or less successful projects. For the same standard economic reasons that operate in research, a risk-averse educator (or faculty) will seek to minimise the downside risk. In other words, we'll see less educational innovation than would benefit our students. And as just discussed, the problem will be even worse than in scientific research, because the downside risk of educational innovation is magnified.
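To make the under-provision logic concrete, here is a toy simulation, entirely my own illustration: an innovation whose payoff draws have a higher mean, but also a much wider spread, than the status quo can still be rejected by an educator with a concave (risk-averse) utility function. All numbers and the utility form are assumptions chosen only to show the mechanism:

```python
import math
import random

random.seed(42)

def expected_utility(draws, utility):
    """Average utility over many simulated payoff draws."""
    return sum(utility(x) for x in draws) / len(draws)

# Concave utility above zero encodes risk aversion; payoffs below
# zero are penalised heavily, echoing the magnified downside of a
# failed live classroom trial.
def utility(x):
    return math.log(1 + x) if x >= 0 else -3 * abs(x)

N = 100_000
# Status quo: modest but reliable teaching outcomes.
status_quo = [random.gauss(1.0, 0.2) for _ in range(N)]
# Innovation: higher mean payoff, much wider spread, including
# outright failures that students experience first hand.
innovation = [random.gauss(1.3, 1.5) for _ in range(N)]

print("mean payoff, status quo :", sum(status_quo) / N)
print("mean payoff, innovation :", sum(innovation) / N)
print("expected utility, status quo :", expected_utility(status_quo, utility))
print("expected utility, innovation:", expected_utility(innovation, utility))
# Despite the higher mean payoff, the innovation's expected utility
# comes out lower for this risk-averse educator, so it is declined.
```

Run as written, the status quo wins on expected utility even though the innovation wins on expected payoff; that gap is exactly the under-provision described above.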

Third, as my experience shows, educational innovation isn't a one-shot activity. You can't try something out in one semester and leave it there. Educational innovation needs refinement over time, and because of the nature of most units, you may only get a feel for the success of a modification at the rate of once a year. That's a slow learning rate by any measure.

So taking the above together, if we want to encourage educational innovation, then faculty managers and educational leaders need to think about creating an innovative context.

I'll explore some ideas for creating this context in the next post, to be released April 15th …


The Highs and Lows of the Educational Innovator: of cobbles, haircuts, and flipped assessment Part Two by Dr Simon Angus is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.