

The EDifier

June 20, 2013

A false comparison

I have a lot of respect for Dr. Linda Darling-Hammond (LDH). She is a wonderful advocate for teachers and public schools in general. She not only highlights the strengths of our teachers and public schools but also continually seeks ways to improve them. In fact, she is one of the reasons I got into education policy research.

However, from time to time, when advocating for teachers and our public schools, she makes false comparisons that one wouldn't expect from a respected researcher. She made such a comparison in her critique of the National Council on Teacher Quality's (NCTQ) recent report card on schools of education, which found the vast majority of schools of education to be of poor quality.

LDH took exception to the findings and pointed out several flaws in the report card's methodology and data collection. However, while her criticism that the report card did not connect schools of education to the actual effectiveness of their graduates is a fair point, the evidence she offered to support it does not hold up. Specifically, she stated:

"In this study, the highest-achieving states on the National Assessment of Educational Progress (NAEP) — including Massachusetts, Vermont, New Hampshire, Maine, New Jersey, and Minnesota — all got grades of C or D, while low-achieving Alabama got the top rating from NCTQ. It is difficult to trust ratings that are based on criteria showing no relationship to successful teaching and learning."

As a former president of the American Educational Research Association (AERA), she knows that just because Massachusetts' students score higher on NAEP than Alabama's students, it doesn't follow that Massachusetts' teachers are more effective than Alabama's. While teachers have a significant impact on student achievement and can close achievement gaps, background factors such as socioeconomic status are highly correlated with a student's overall achievement level. This is the rationale behind measuring students' annual growth, which has less to do with their background characteristics and more to do with the effectiveness of their teachers. Since Alabama has many more students from disadvantaged backgrounds than Massachusetts, Vermont, New Hampshire, Maine, New Jersey, and Minnesota, it is possible that Alabama's students made greater gains while in school than students in the high-performing states but simply started school at a lower level, and so had lower overall achievement scores.

LDH knows that a more accurate way to compare the effectiveness of Alabama's teachers with their peers in states like Massachusetts is to use a value-added model, which estimates a teacher's effectiveness from the achievement growth of his or her students while controlling for student background characteristics. Value-added measures may not be perfect, but they provide a far more accurate comparison than judging the quality of a state's teachers by overall NAEP scores.
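To make the distinction concrete, here is a minimal sketch of how a value-added-style estimate can be computed. This is purely illustrative, not any state's actual model; the data file, column names, and model specification are hypothetical assumptions, and real value-added models are considerably more elaborate.

```python
# A minimal, illustrative sketch of a value-added-style estimate.
# The file and column names (score_2013, score_2012, free_lunch,
# english_learner, teacher_id) are hypothetical. The idea: predict this
# year's score from last year's score plus background controls, then
# average each teacher's students' residuals.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_scores.csv")  # hypothetical student-level data

# Regress current achievement on prior achievement and background factors.
model = smf.ols(
    "score_2013 ~ score_2012 + free_lunch + english_learner",
    data=df,
).fit()

# A student's residual is the growth that prior achievement and the
# background controls do not explain.
df["residual"] = model.resid

# Averaging residuals across each teacher's students gives a crude
# value-added estimate that can be compared across teachers, states,
# or the programs that prepared those teachers.
value_added = df.groupby("teacher_id")["residual"].mean().sort_values()
print(value_added.head())
```

The key design choice is that the comparison is made against each student's own predicted growth rather than against raw achievement levels, which is what makes the estimate less sensitive to differences in student background between states like Alabama and Massachusetts.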

While I agree with LDH that schools of education should be evaluated based on the effectiveness of their graduates once they are in the classroom, such data simply aren't available at the moment. In fact, schools of education have not exactly embraced attempts to evaluate them this way. If schools of education want to be evaluated more fairly, they should do it themselves and be more transparent about how effective they really are. Taxpayers are sending millions of dollars to these schools in the form of subsidized loans and loan forgiveness programs with no evidence of a return on their investment. With nearly half of teachers leaving the profession within five years and research consistently showing that most first-year teachers are relatively ineffective, the evidence we do have indicates teachers are not prepared for their first year in the classroom. Unfortunately, we have relied on teachers 'learning by doing' instead of our schools of education adequately preparing them to be effective from their first day in the classroom.

Most of our teachers are very good, but most didn't start off that way. If all our current teachers were nearly as effective in their first year as in their fifth year, the U.S. would have one of the most effective teaching forces in the world. – Jim Hull

Filed under: Growth Models, NAEP, Public education, Teacher evaluation, teachers — Jim Hull @ 9:22 am





4 Responses to “A false comparison”

  1. Robin Kuykendall says:

    I'm not as sure as Mr. Hull about what Prof. Darling-Hammond knows, but neither does this article show us where on the scale Alabama students entered the educational process. Nor do we know, from here, whether Alabama's teachers were prepared more at Alabama or at Massachusetts institutions. That might make a difference, too.

  2. Jim Hull says:

    You are correct, Robin, we don't know how prepared Alabama students were when they entered the educational process compared to students in the higher-performing states. That information, among other data, is needed to make any valid comparisons of the quality of teachers between states.

  3. Joan C. Grim says:

    “This is the rationale behind measuring student’s annual growth, which has less to do with their background characteristics and more to do with the effectiveness of their teacher.”
    This statement is incorrect on many levels.

    SGMs do not measure teacher effectiveness and were never designed to do so. SGMs are designed to measure a child's growth relative to his/her age.
    Using assessment scores for purposes for which they were not designed invalidates the measure. As such, teacher quality should not be determined by SGMs. Assuming SGMs filter out past learning and isolate teacher effects is as valid as assuming phrenology predicts personality.

  4. Jim Hull says:

    You are correct, Joan, that Student Growth Percentiles (SGPs) are not measures of teacher effectiveness. That is why I stated that value-added measures provide a much more accurate measure of teacher effectiveness. A high-quality value-added measure does take into account each student's previous achievement and other factors to isolate the impact a teacher had on his or her students' achievement growth.

    Unfortunately, a number of states are now evaluating teachers using student growth percentiles that make no attempt to isolate the impact of a teacher on their students’ performance.
