The everyday blog of Richard Bartle.
6:35pm on Tuesday, 27th November, 2018:
My Easter breaks in recent years have been dominated by my marking of the CE317 assignment that I set in the spring term.
It takes me an average of 30 minutes per student, and it's audio: I look at their assignment and record my thoughts as I read through it. I could mark the assignments much faster if I didn't give this level of feedback, but then the feedback wouldn't be as good (in my view, anyway; I don't know what the students think of it).
Last week, lecturers in the School of Computer Science and Electronic Engineering were all sent an email explaining new procedures governing feedback. Feedback is one area where we score badly in the yearly National Student Survey; this matters because newspapers use the NSS to calculate which universities are best, using their own secret formulae (spoiler alert: Oxford and Cambridge will come out top). In order to improve feedback, the new procedures mandate that all our feedback be moderated by the year manager before release to students. I think the idea is that we have to redo it if it's not good enough, although this will of course delay release of the marks and hit us elsewhere in the NSS. Given that 1 in 20 of my own students last year asserted that I had taken longer than 3 weeks to release any feedback when I had verifiably released it all well before that deadline, we can't necessarily trust what students say. Anyway, the point is that someone has to moderate my CE317 feedback to make sure it's up to scratch.
How do you moderate feedback? Well, you have to look at it. How do you moderate feedback that's not written down, though? Well, you have to watch or listen to it. This means that even if a 10% sample of my feedback recordings were audited, it would take the year manager 3 hours or so to listen to the samples. This was deemed too much work for the year manager. I was told I should reduce my audio feedback to two or three minutes at most.
If I did that, well, I'd be writing the feedback down and reading it into a microphone; I may as well just send the written feedback. This is therefore what I shall be doing.
As a result of this exercise, then, students won't be getting feedback as good as before, but they will be getting feedback which has been deemed good enough. Put another way, the process of checking that my feedback is of a high enough quality has itself lowered the quality of my feedback.
The worst thing about this is that it may actually increase the school's overall NSS score for feedback. My audio feedback is at the high end relative to what most other members of staff produce, so it shows up average feedback as being, well, average. If I and the other people who spend time to give this level of feedback are reined in, then the average feedback doesn't look far behind. So long as the students have a baseline of dreadful feedback against which to measure relative quality (which we can give them in their second year, as that doesn't count for NSS scores), anything above that looks positively great, regardless of whether it is actually great or merely OK.
As an analogy, if we gave students cans of tomato soup for feedback in the second year and then in the third year we also gave them cans of minestrone, scotch broth and asparagus, they would likely rate the third year feedback more highly. However, if some of us gave them pizza instead, well the students might no longer hold the soup in such high regard. Get rid of the pizza, and the soup will score more highly — even though overall the students are getting better feedback if they occasionally receive pizza.
I'm not entirely sure how a professor of robotics is meant to gauge whether the feedback for an assignment about the Hero's Journey is of acceptable quality or not, but I'm sure the students will benefit in the end.
Copyright © 2018 Richard Bartle (firstname.lastname@example.org).