

The EDifier

May 2, 2013

CPE’s new report examines the strategies behind the “turnaround”

The Center for Public Education is excited to announce the release of a new report, "Which Way Up: What research says about school turnaround strategies." The title is a play on the plethora of strategies aimed at improving the lowest-performing schools in the country.

This is a worthy goal, perhaps the most important mission of public schools: to ensure all students receive a world-class education. But challenges abound, and some schools, for a variety of reasons, fail to deliver on the promise, giving rise to a wave of reform models known simply as the turnaround. The problem is that many of these efforts have relied on strategies that have produced mixed results, if any at all. We called upon the education researcher and writer team of Eileen M. O'Brien and Charles J. Dervarics to take a closer look at the research, and here's what they found.

Although individual states and cities have attempted to address chronically low-achieving schools over the years, the US Department of Education’s School Improvement Grant (SIG) program is the largest undertaking in the school turnaround arena. A relic of the No Child Left Behind Act, it received a significant funding boost (some $3.5 billion) in 2009, thanks to the federal stimulus bill known as the American Recovery and Reinvestment Act.

As it did with Race to the Top, the department made SIG a competitive grant program and required grant seekers to choose among four intervention models to secure the funds:

  • The school closure model, in which the low-performing school is closed and students move to a higher achieving school.
  • The restart model, in which the school becomes a charter or is taken over by an education management organization.
  • The transformation model, in which the school replaces the principal, provides enhanced professional development to staff, launches a teacher evaluation system, increases learning time, and creates new support services for students.
  • The turnaround model, which includes many of the same elements as the transformation model with the additional requirement that teachers must reapply for their jobs. A turnaround school must replace at least 50 percent of the staff and grant the new principal greater autonomy to pursue reforms.

First-year data on SIG award recipients show some positive gains, particularly at the elementary level and in reading. But one year's worth of data is hardly a trend, and the achievement data was not broken down by reform model, which would have provided greater insight into which strategies are the most effective.

Previous research on some of these strategies has been a little more enlightening. For instance, research is pretty clear about the impact of school closures on student achievement: moving students to better-performing schools produces gains; moving them to lower-performing ones doesn't. The research on charter schools, a hallmark of the restart model, is also fairly definitive: charter schools, on average, perform no better and no worse than their traditional school counterparts.

Evidence on the transformation model, which is far and away the most popular model among SIG recipients, is mixed and confounded by the great latitude schools are given in implementation — good for schools but hard for researchers, who are, of course, interested in identifying and evaluating effectiveness.

Even more worrisome than the large-scale federal push toward strategies that are untested or have shown mixed results in reversing chronically low-achieving schools is the adoption of some of these strategies — school closure, conversion to charter, replacement of a majority of staff — into parent trigger laws.

While we can't and shouldn't lessen our focus on helping the country's lowest-achieving schools deliver on public education's promise to all students, we should be mindful and methodical about what we're investing in to get them there.






March 7, 2013

John Stossel, funky charts and Simpson’s paradox

John Stossel was on Fox & Friends this morning to promote an upcoming show about public schools. Remember, this is the guy who gave us Stupid in America, his ABC documentary from a few years back about our allegedly failing schools. During his segment, he claimed that "America has tripled spending, but test scores haven't improved." The culprits? Teachers unions, school boards and other unnamed bureaucrats. Viewers were then shown a graph that indeed featured a flat line representing test scores over 40 years (a 1-point improvement) alongside a second line escalating to $149,000 over the same period. The source was given as NCES. This got my fact-checking synapses sparking.

While I could not find the exact graph they showed on TV, Stossel did post this rather snazzy display on his blog with the same data:

Go ahead and take a moment to admire the work of the Fox News graphics department. OK, now let's talk data. This chart shows scores for three subjects (math, reading and science) and dollar figures (the "cost of education") from 1970 to 2010. While the source is not noted, I'm assuming it's still NCES.

This may get a little wonky, but stay with me.  NCES reports trend data over four decades for only two tests:  the National Assessment of Educational Progress (NAEP) Long-Term Trends (LTT) and the SAT. NCES also has international test scores, but that data only goes back to the 1990s so that couldn’t be what Stossel used.  The SAT does not assess science, which leaves NAEP LTT as the only possibility. It’s not a perfect match. The last NAEP LTT administration was in 2008 although Stossel’s chart shows data to 2010. But I’m going to assume that he fudged a little on the timeframe because nothing else qualifies.

NAEP LTT is given to a representative sample of students at ages 9, 13, and 17. I'm also going to assume that his analysis is based on 17-year-olds because the data matches his in reading and comes closest in math (more on this later). Between 1971 and 2008, LTT reading scores for 17-year-olds have been relatively flat, posting an increase of just 1 point (not 1 percent as shown on Stossel's chart, but we'll blame the designer for that common mistake). Here's what it looks like:

Now let’s have some fun. Let’s look at the same test scores disaggregated by race and ethnicity:

Note that every group improved more than the overall score did: White 17-year-olds gained 2 points, while their Black and Hispanic classmates gained a whopping 25 and 17 points, respectively. This gives me a chance to talk about Simpson's paradox, which occurs when "a trend that appears in different groups of data disappears when these groups are combined, and the reverse trend appears for the aggregate data." In this case, the overall trend for 17-year-olds is flat while each group gained, some groups by a lot. The reason is that the racial/ethnic makeup of the student population changed significantly between 1975 and 2008. Here is the distribution of the NAEP samples for the two years:

The proportion of Black and Hispanic 17-year-olds has grown, while the proportion of White students in 2008 is 25 percentage points lower than it was in 1975. And even though Black and Hispanic scores rose substantially, both groups still scored lower than their White peers in 2008. Thus, every group gains, but because the lower-scoring (if faster-improving) groups now carry more weight in the average, the combined trend stays flat.
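To see the arithmetic at work, here's a minimal sketch in Python. The scores and population shares are made up for illustration (they are not the actual NAEP figures), but they are shaped the same way: every group improves while the weighted overall average barely moves.

    # A minimal sketch of Simpson's paradox: the overall score is a
    # population-weighted average, so shifting weight toward groups that
    # score lower (even as every group improves) can flatten the trend.
    # All numbers below are hypothetical, not actual NAEP data.

    def overall(scores, shares):
        """Population-weighted average score."""
        return sum(score * share for score, share in zip(scores, shares))

    # Group order: White, Black, Hispanic
    scores_then = [293, 239, 252]
    shares_then = [0.85, 0.10, 0.05]

    scores_now = [295, 264, 269]      # every group improves: +2, +25, +17
    shares_now = [0.60, 0.20, 0.20]   # but the population mix shifts

    print(overall(scores_then, shares_then))   # ~285.6
    print(overall(scores_now, shares_now))     # ~283.6 -- "flat"

Every subgroup gains, yet the combined average actually dips a bit: exactly the pattern a single aggregate line conceals.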

Clearly, no one would argue that an achievement gap, even a narrowing one, is acceptable and that we can move on to other things. But it's just as absurd to look at these gains and find evidence of failing schools, as Stossel does. And the absurdity doesn't end there. Stossel, it turns out, is a master cherry-picker of data. Let's look at the rest of NAEP Long-Term Trends:

  • Reading, 13-year-olds, 1971-2008: Overall scores +12; Black students +23; Hispanic +24.
  • Reading, 9-year-olds, 1971-2008: Overall +5; Black +21, Hispanic +10.
  • Mathematics, 17-year-olds, 1978 (first year tested)-2008: Overall +6, Black +19, Hispanic +17.
  • Mathematics, 13-year-olds, 1978-2008: Overall +17, Black +32, Hispanic +17.
  • Mathematics, 9-year-olds, 1978-2008: Overall +24, Black +32, Hispanic +30.

Notice a pattern? If you were to apply Stossel's grossly oversimplified cost-to-scores analysis (and I'm not saying you should), you would have to conclude that our public schools are producing a return on our investment. Then again, how he got those cost figures is another topic for another day.

Filed under: Achievement Gaps, Data, Demographics, Public education — Patte Barth @ 2:46 pm





February 26, 2013

The changing face of America and its schools

Our report on how demographic shifts are changing the cultural landscape of the United States and its education system remains one of our most popular. So I think you'll enjoy this recent graphic representation of 2010 U.S. Census data, courtesy of Education Week.

Filed under: CPE, Data — NDillon @ 3:21 pm





January 22, 2013

The flu season in graphics

We're officially at the midway point of the flu season, and while we won't know for months what kind of havoc influenza wreaked on the U.S., early CDC reports indicate the flu has struck hard in most parts of the country. What does that mean in real dollars and cents? Check out the graphic below to get an idea of the toll the flu can take on your bottom line.


Filed under: Data — NDillon @ 2:43 pm





January 10, 2013

Gates Foundation report mirrors CPE’s findings

On Tuesday, the Gates Foundation released its third and final report on how (and if) teacher effectiveness can be quantitatively evaluated. The findings of the aptly titled Measures of Effective Teaching (MET) project were hardly earth-shattering, but they are noteworthy nonetheless. Why?

The sheer size of the project — it spanned three years, cost $45 million, studied 3,000 teachers from eight districts across seven states, and involved numerous universities and the Educational Testing Service — made it hard to ignore.

Despite all of the resources dumped into this effort, however, the findings were remarkably similar to what the Center for Public Education discovered in its 2011 report, “Building a Better Evaluation System.”

Among the key takeaways from that report was the importance of using multiple measures to develop an accurate picture of whether, and how much, a teacher is contributing to student learning.

Surprise, surprise: the Gates Foundation discovered the same thing, determining that a combination of classroom observations, test scores, and student surveys, taken as a whole, is a solid indicator of teacher effectiveness.

Certainly, there are still some critics who disagree with the MET study's whole premise — that data collection and disaggregation can be an effective means of identifying effective (and ineffective) teachers. To them, too many outside factors, from a child's socioeconomic background to the level of parental involvement, impact student growth and make it impossible to truly ascertain individual teacher quality.

So-called value-added or growth models that attempt to isolate these external variables are no more reliable, opponents say, because of the huge fluctuations that can occur from year to year.

While value-added models aren't perfect, CPE's report found they are far better than current methods of measuring teacher effectiveness. With time and more data, CPE further noted, those wide swings diminish, giving educators greater clarity about what is and isn't working. And determining what's effective and what's not is nearly impossible without real data and metrics. That is yet another reason the MET report has commanded, and deserves, attention — though CPE arrived at the same conclusion for about $45 million less. — Naomi Dillon
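To see why more years of data calm those swings, here's a toy simulation in Python (my own illustration, not CPE's or MET's actual methodology; the effect size and noise level are invented). It models a teacher's single-year value-added estimate as her true effect plus random noise; averaging n years of estimates shrinks the noise by roughly 1/sqrt(n).

    # A toy illustration (not CPE's or MET's actual model) of why
    # multi-year value-added estimates swing less than single-year ones.
    import random

    random.seed(1)

    TRUE_EFFECT = 3.0   # hypothetical teacher effect, in test-score points
    NOISE_SD = 5.0      # hypothetical year-to-year estimation noise

    def value_added_estimate(years):
        """Average of `years` noisy single-year estimates."""
        yearly = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(years)]
        return sum(yearly) / years

    for years in (1, 3, 5):
        estimates = [value_added_estimate(years) for _ in range(10_000)]
        mean = sum(estimates) / len(estimates)
        var = sum((e - mean) ** 2 for e in estimates) / len(estimates)
        print(f"{years}-year average: estimates swing with SD ~ {var ** 0.5:.1f} points")

In this toy setup, one year of data leaves estimates swinging with a standard deviation near 5 points; averaging five years cuts that to roughly 2.2 points, which is the "greater clarity" the report describes.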






