Neither education historian Diane Ravitch nor Washington Post blogger Valerie Strauss is a fan of using value-added measures to evaluate teachers. [Note: value-added is a statistical term for a measure of a teacher’s impact on his or her students’ academic growth – see our report for a further explanation.] Both Ravitch and Strauss are particularly upset with the attention given to a recent study on value-added measures, which I wrote about earlier this month.
Apparently, Ravitch and Strauss do not believe, as I do, that the results are a big deal. They argue that despite the study’s positive results, using value-added measures to evaluate teachers is a bad idea. Their criticisms pretty much capture the general consensus among value-added critics. But many of these criticisms, though well-intentioned, are based on misunderstandings of value-added measures, especially as they are used in teacher evaluation formulas.
In the next few posts, I’ll examine the merits of common criticisms of value-added measures that Ravitch, Strauss and others have highlighted, and point out the misconceptions.
Criticism 1: Studies have shown value-added measures to be unreliable, invalid, and unfair.
Response 1: This is an overstatement. Yes, there are several rigorous studies showing reliability problems, but only when a single value-added score is used, by itself, to evaluate an individual teacher.
Nobody is seriously proposing to use value-added measures this way. No teacher evaluation system I am aware of even proposes counting a value-added score for more than 50 percent of a teacher’s total evaluation. At least half of a teacher’s evaluation would be based on qualitative measures such as principal and peer observations – which, by the way, correlate highly with value-added scores. Other systems propose using statistical techniques that make value-added scores more reliable, such as averaging a teacher’s scores over multiple years; a simple sketch of why averaging helps follows this paragraph.
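To see why multi-year averaging matters, here is a minimal sketch in Python. Everything in it is an illustrative assumption (the number of teachers, the size of the year-to-year noise); it is not any district’s actual formula. The point is simply that averaging noisy yearly scores pulls the estimate closer to a teacher’s true effectiveness.

```python
# A minimal sketch of why averaging value-added scores over multiple years
# improves reliability. All numbers here are illustrative assumptions,
# not published parameters from any real evaluation system.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1000
true_effect = rng.normal(0, 1, n_teachers)  # each teacher's "true" effectiveness

def yearly_estimate():
    # A single year's value-added score = true effect + sampling noise.
    # The noise level (sd = 1.5) is an assumption for illustration.
    return true_effect + rng.normal(0, 1.5, n_teachers)

for n_years in (1, 3, 5):
    avg = np.mean([yearly_estimate() for _ in range(n_years)], axis=0)
    r = np.corrcoef(avg, true_effect)[0, 1]
    print(f"{n_years} year(s) averaged: correlation with true effect = {r:.2f}")
```

Running this, the correlation with the true effect climbs as more years are averaged – which is exactly the reliability gain that is missing from a single-year score.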
Keep in mind, too, that although value-added measures are not perfect, they are better at identifying the true effectiveness of teachers than the evaluation systems in place now, as I show in our report, Building a Better Evaluation System.
Criticism 2: If teachers were evaluated using value-added measures, they would avoid teaching the most challenging students and avoid working in the most challenging schools and districts.
Response 2: Value-added measures were designed specifically to combat this problem. Yes, previous attempts to evaluate teachers using quantitative measures did push teachers away from challenging positions. Value-added measures, however, more accurately isolate a teacher’s impact on students’ test scores by explicitly taking students’ prior achievement into account. This means, for instance, that teachers of low-performing students are effectively compared with other teachers of low-performing students. In addition, value-added measures are based on the amount of growth students make in a year – not on their overall score at the end of the year, as previous methods were. A simple sketch of this logic follows.
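For readers who want to see the mechanics, here is a minimal sketch of the core idea, using synthetic data. Real value-added models are far more elaborate (multiple prior years, additional student controls, statistical shrinkage), so treat this as an illustration of the logic, not the actual methodology: predict each student’s score from prior achievement, then credit the teacher with the average amount by which his or her students beat that prediction.

```python
# A minimal sketch, on synthetic data, of the core idea behind a
# value-added model. Illustrative only; real models are more elaborate.
import numpy as np

rng = np.random.default_rng(1)
n_students, n_teachers = 5000, 50
teacher = rng.integers(0, n_teachers, n_students)
teacher_effect = rng.normal(0, 3, n_teachers)   # each teacher's true impact

prior = rng.normal(50, 10, n_students)          # last year's score
# This year's score depends on prior achievement, the teacher, and noise.
score = 5 + 0.9 * prior + teacher_effect[teacher] + rng.normal(0, 5, n_students)

# Step 1: predict scores from prior achievement alone (simple least squares).
X = np.column_stack([np.ones(n_students), prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ beta

# Step 2: a teacher's value-added is the average amount by which that
# teacher's students beat (or miss) their predicted scores.
residual = score - predicted
value_added = np.array([residual[teacher == t].mean() for t in range(n_teachers)])
print("correlation with true teacher effect:",
      round(np.corrcoef(value_added, teacher_effect)[0, 1], 2))
```

Because every student is compared to a prediction based on where he or she started, a teacher whose students all start far behind is not penalized for their starting point.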
Strauss adds that value-added can’t possibly measure a teacher’s true effectiveness, since 22 percent of children live in poverty and poverty is strongly correlated with student achievement. She seems to assume that value-added doesn’t take a student’s socioeconomic status into account, but this is untrue. Value-added models are built to account for student characteristics, including poverty level.

Strauss is correct that there is a strong correlation between poverty and a student’s achievement level – that is, a student’s achievement at one point in time. But there is little correlation between poverty and achievement growth, the change in student achievement over time. And value-added measures are based on achievement growth, not level; the toy simulation below illustrates the distinction. It’s this focus on growth that makes value-added measures so valuable – and why you should come back tomorrow to read more answers to the criticisms of value-added.
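Here is a toy simulation, deliberately constructed to match the claim above rather than drawn from real data: poverty shifts where students start but, by assumption in this sketch, not how much they grow in a year. Measures based on achievement levels pick up the poverty gap; measures based on growth do not.

```python
# A toy simulation (constructed to illustrate the claim, not real data):
# poverty shifts a student's achievement *level* but, by assumption here,
# not his or her year-to-year *growth*. Value-added looks at growth.
import numpy as np

rng = np.random.default_rng(2)
n = 10000
poverty = rng.binomial(1, 0.22, n)                    # ~22 percent of students
level_t1 = 60 - 10 * poverty + rng.normal(0, 8, n)    # fall score: poverty matters
growth = rng.normal(5, 3, n)                          # growth independent of poverty
level_t2 = level_t1 + growth                          # spring score

print("corr(poverty, spring level):", round(np.corrcoef(poverty, level_t2)[0, 1], 2))
print("corr(poverty, growth):      ", round(np.corrcoef(poverty, growth)[0, 1], 2))
```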
– Jim Hull