This article in the Chronicle defends student evaluations as “not worthless.”
But I just want to stop for a second and examine the claim that there is a 0.5 correlation between evaluation scores and student learning, because when I saw that, my first thought was, “I wonder how they measure student learning…”
It turns out, they measure student learning with grades.
Yes, I took a look at a few of the linked studies, and while they do acknowledge the problem of using something like final exam grades to measure student achievement, that’s pretty much what many of them do.
So, there’s a 0.5 correlation between students’ evaluations of you and how well they did in your class. Does this mean that evaluations measure whether you’re a good teacher? Actually, I think the causal arrow goes the wrong way. You’re getting good evaluations because the students got good grades. Maybe you’re an easy grader!
But wait, it gets better. In the blog post linked by the author of the Chronicle piece, there is a lovely chart showing a 0.53 correlation between course averages and evaluations. But it’s a HYPOTHETICAL chart. It’s hypothetical data, based on “what we would expect based on previous studies.” Check it out. Hypothetical data.
I get the point that the blog was trying to make with this hypothetical scatter plot – that professors with low evaluation scores aren’t necessarily worse, that they don’t necessarily give worse grades (pardon me, they don’t necessarily “fail to incite student learning”). I heartily agree with the notion that ordinal evaluations of professors cannot be compared or used to say whether one professor is better than another.
But to me, this is exactly the reason why student evaluations are worthless.