Our report on how demographic shifts are changing the cultural landscape of the United States and its education system remains one of our most popular. So I think you’ll enjoy this recent graphic representation of 2010 U.S. Census data, courtesy of Education Week.
February 26, 2013
January 22, 2013
We’re officially at the midway point of the flu season, and while we won’t know for months what kind of havoc influenza wreaked on the U.S., early CDC reports indicate the flu has struck decisively in most parts of the country. What does that mean in real dollars and cents? Check out the graphic below to get an idea of the toll the flu can take on your bottom line.
January 10, 2013
On Tuesday, the Gates Foundation released its third and final report on how (and if) teacher effectiveness can be quantitatively evaluated. Appropriately titled Measures of Effective Teaching, or MET, the study produced findings that were hardly earth-shattering but noteworthy nonetheless. Why?
The sheer size of the project made it hard to ignore: it spanned three years, cost $45 million, studied 3,000 teachers from eight districts across seven states, and involved numerous universities as well as the Educational Testing Service.
Despite all of the resources dumped into this effort, however, the findings were remarkably similar to what the Center for Public Education discovered in its 2011 report, “Building a Better Evaluation System.”
Among the most important takeaways from that report was the need to use multiple measures to develop an accurate picture of whether, and how much, a teacher was contributing to student learning.
Surprise, surprise: the Gates Foundation discovered the same thing, determining that a combination of classroom observations, test scores, and student surveys, taken as a whole, is a solid indicator of teacher effectiveness.
Certainly, some critics still disagree with the MET study’s whole premise: that data collection and disaggregation can be an effective means of identifying effective (and ineffective) teachers. To them, too many outside factors, from a child’s socioeconomic background to the level of parental involvement, affect student growth and make it impossible to truly ascertain individual teacher quality.
So-called value-added or growth models that attempt to isolate these external variables are no more reliable, opponents say, because of the huge fluctuations that can occur from year to year.
While value-added models aren’t perfect, CPE’s report found they are far better than current methods of measuring teacher effectiveness. With time and more data, CPE further noted, those wide swings diminish, giving educators greater clarity about what is and isn’t working. But determining what’s effective and what’s not is nearly impossible without real data and metrics. This fact is yet another reason the MET report has commanded, and deserves, attention, though CPE arrived at the same conclusion for about $45 million less.–Naomi Dillon
January so far is looking like Michelle Rhee month. Last night the self-described education reformer was the hour-long focus of PBS’s Frontline series. The day before, her organization, StudentsFirst, released its report card on state education policy, in which Rhee and her colleagues “flunked” most states. The headlines wrote themselves.
But before we collectively freak out about our own state’s GPA, let’s take a critical look at what StudentsFirst is grading. First, and I can’t emphasize this enough: there are no points awarded for education performance. None. Zero. So if you’re concerned about that ‘F’, Vermont, relax. You are still a high-achieving state.
What StudentsFirst did look for were state education policies that align with its agenda. These include, among others, basing teacher and principal staffing decisions on student achievement measures and “empowering parents” through charter schools and vouchers. Limiting the rankings to policies, however, leads to some strange juxtapositions.
In the following table, I list the ten top-performing states in education as identified in KidsCount, the annual report card published by the Annie E. Casey Foundation. The KidsCount education index includes pre-K participation, NAEP scores in reading and math, and high school graduation rates. I then compare these to each state’s StudentsFirst grade:
Not much relationship here between achievement and StudentsFirst policy preferences. Looking only at the StudentsFirst grades for school choice, the relationship is even sketchier: six of the top ten states earned an “F,” while the highest grade was a “D.”
The Brookings Institution released a much less publicized report card before Christmas that graded urban districts on school “choice and competition” but, like StudentsFirst, placed little value on actual performance. Likewise, the Brookings rankings look a little wacky when compared to district performance. For example, New York City was ranked second with a letter grade of B+, yet its eighth-graders performed significantly below the overall national average on NAEP in math. Number three-ranked D.C. (a “B”) was 19th out of 21 urban districts on the same test. In contrast, middle-schoolers in urban Austin exceeded the national average of all students; Brookings gave that Texas school district an F. (Comparable data were not available for number one-ranked New Orleans.)
We could easily go overboard drawing conclusions from these inconsistencies. To begin with, we can’t say for sure that the Brookings/StudentsFirst agendas work against achievement. But we can say one thing: many high-profile organizations are promoting education reform policies that do not have a proven track record to support them.
November 20, 2012
Last month I wrote about how a close look at district staffing data doesn’t support the Milton Friedman Foundation’s claim that districts have been on a hiring surge of ‘non-instructional staff’. As a matter of fact, the bulk of the increase in administrative hiring was of instructional coordinators and aides, positions that certainly impact student instruction.
To bolster my claim that districts have not been using taxpayer money to pad central office bureaucracies, I’ll point to financial data from the National Center for Education Statistics (NCES) I recently came across, which shows districts are actually spending less of their budgets on administrators than they were two decades ago. In 1989, the average school district spent 11.2 percent of its budget on administrators; by 2009 that share had decreased to 10.8 percent. Over the same period, districts devoted roughly the same share of their budgets to instruction (60.9 percent in 1989 and 61.0 percent in 2009).
Over the past two decades, districts have instead made a greater investment in student and teacher support staff, whose share of district budgets grew from 8.2 percent in 1989 to 10.2 percent in 2009. Such staff typically includes instructional coordinators and aides, positions that indirectly benefit classroom instruction. It is likely no coincidence that NAEP achievement increased significantly over this time as well.
While it may be popular to imply that districts are wasting taxpayer money on highly paid administrators who don’t improve student achievement, data on both district staffing and expenditures paint a much different picture. Districts have been investing more in supporting students and teachers, which appears to have had a significant positive impact on student achievement over the past two decades.—Jim Hull