New Research on College Teaching Evaluations
Greg Mankiw links to this article in the Journal of Political Economy, "Does Professor Quality Matter? Evidence from Random Assignment of Students to Professors" (available here). Following are some excerpts:
"In primary and secondary education, measures of teacher quality are often based on contemporaneous student performance on standardized achievement tests. In the post-secondary environment, scores on student evaluations of professors are typically used to measure teaching quality. We possess unique data [10,534 students who attended U.S. Air Force Academy from the fall of 2000 through the spring of 2007] that allow us to construct a third measure of teacher quality that captures student performance differences in mandatory follow-on classes that are part of an established course sequence. We compare metrics that capture these three different notions of instructional quality and present evidence that professors who excel at promoting contemporaneous student achievement teach in ways that improve their student evaluations but harm the follow-on achievement of their students in more advanced classes.
Results show that there are statistically significant and sizable differences in student achievement across introductory course professors in both contemporaneous and follow-on course achievement. However, our results indicate that professors who excel at promoting contemporaneous student achievement, on average, harm the subsequent performance of their students in more advanced classes. Academic rank, teaching experience, and terminal degree status of professors are negatively correlated with contemporaneous value added, but positively correlated with follow-on course value-added. Hence, students of less experienced instructors who do not possess a Ph.D. perform significantly better in the contemporaneous course, but perform worse in the follow-on related curriculum.
That is, students appear to reward higher grades in the introductory course, but punish professors who increase deep learning (introductory course professor value-added in follow-on courses). Since many U.S. colleges and universities use student evaluations as a measurement of teaching quality for academic promotion and tenure decisions, this latter finding draws into question the value and accuracy of this practice. Our findings raise concerns about the use of either contemporaneous value-added or student evaluations as signals of teaching quality."
MP: One interpretation of this might be that, while taking classes in college, students reward "easy" professors with higher teaching evaluations and punish "hard" or "demanding" professors with lower ones. But when they're sitting (and sweatin') for the CPA exam a few years later, their ex-post teaching evaluations would likely be reversed: delayed appreciation for their "hard" accounting professors and delayed regret for their "easy" accounting professors.
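To make the study's value-added comparison concrete, here is a minimal illustrative sketch, not the authors' actual specification (the paper exploits random assignment and a richer set of controls). All column names and the synthetic data are hypothetical; the point is only to show how a professor's "contemporaneous" and "follow-on" value-added could each be summarized and compared against evaluation scores.

```python
# Illustrative sketch only (assumed/hypothetical data, not the paper's method):
# estimate each intro professor's value-added as the mean standardized score of
# their intro students in (a) the intro course and (b) the required follow-on
# course, then correlate both with the professor's mean student evaluation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical student-level records: intro professor, intro course score,
# follow-on course score, and the student's evaluation of the intro professor.
n_students, n_profs = 3000, 30
df = pd.DataFrame({
    "prof_id": rng.integers(0, n_profs, n_students),
    "intro_score": rng.normal(0, 1, n_students),
    "followon_score": rng.normal(0, 1, n_students),
    "eval_score": rng.uniform(1, 5, n_students),
})

# Standardize outcomes so value-added is expressed in student-level SDs.
for col in ["intro_score", "followon_score"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# Professor-level averages: contemporaneous VA, follow-on VA, mean evaluation.
prof = df.groupby("prof_id").agg(
    contemporaneous_va=("intro_score", "mean"),
    followon_va=("followon_score", "mean"),
    mean_eval=("eval_score", "mean"),
)

# The paper's headline pattern would appear here as a positive correlation
# between contemporaneous VA and evaluations but a negative correlation between
# follow-on VA and evaluations (this random data will show roughly zero).
print(prof.corr())
```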
3 Comments:
I guess that better student evaluations of instructors are given in elective courses. It seems that core requirements are taught by more seasoned instructors because of the academic weight given by the university. The breezy elective is more enjoyable and so gets a better student evaluation than the hard core course.
This past semester I found myself wondering just how accurate student evaluations were, when one student in my class was intent on "punishing" a teacher who was, in reality, a pretty good math teacher, albeit not as helpful as some of the female teachers. The fact that he was a coach seemed to suggest to some students that he should not be teaching math, when in fact he enjoyed math very much.
So how do you weed out the incompetent turkeys who just can't teach? I always evaluated them low, but now somebody might interpret that score as indicating they are "deep".