On Tuesday, the Gates Foundation released its third and final report on how (and if) teacher effectiveness can be quantitatively evaluated. Appropriately titled Measures of Effective Teaching, or MET, the report offered findings that were hardly earth-shattering but noteworthy nonetheless. Why?
The sheer size of the project made it hard to ignore: it spanned three years, cost $45 million, studied 3,000 teachers from eight districts across seven states, and involved numerous universities as well as the Educational Testing Service.
Despite all of the resources dumped into this effort, however, the findings were remarkably similar to what the Center for Public Education discovered in its 2011 report, “Building a Better Evaluation System.”
Among the key takeaways from that report was the importance of using multiple measures to develop an accurate picture of whether, and how much, a teacher was contributing to student learning.
Surprise, surprise, the Gates Foundation discovered the same thing and determined that a combination of classroom observations, test scores, and student surveys taken as a whole was a solid indicator of teacher effectiveness.
Certainly, there are still some critics who disagree with the MET study’s whole premise: that data collection and disaggregation can be an effective means of distinguishing effective teachers from ineffective ones. To them, too many outside factors, from a child’s socioeconomic background to the level of parental involvement, impact student growth and make it impossible to truly ascertain individual teacher quality.
So-called value-added or growth models that attempt to isolate these external variables are not any more reliable, opponents say, because of the huge fluctuations that can occur from year to year.
While value-added models aren’t perfect, CPE’s report found they are far better than current methods of measuring teacher effectiveness. With time and more data, CPE further noted, those wide swings diminish, providing greater clarity to educators about what is and isn’t working. But determining what’s effective and what’s not is nearly impossible without real data and metrics. This fact is yet another reason why the MET report has commanded, and deserves, attention, though CPE arrived at the same conclusion for about $45 million less.–Naomi Dillon