

The EDifier

February 7, 2017

School Improvement Grants: Why didn’t $7 billion change results for students?

Mathematica recently released a study of the federal School Improvement Grants (SIG) program. Their findings? Schools receiving the extra funds showed no significant improvement over similar schools that did not participate. With a price tag of $7 billion (yes, with a “b”), this strikes many as a waste of taxpayer dollars. Interestingly, the study also found no evidence that SIG schools actually had significantly higher per-pupil expenditures than similar schools that didn’t receive the grants, which may have contributed to the lackluster results.

SIG, which was administered by states, awarded up to $2 million per school annually across some 1,400 schools. The program began in the 2010-11 school year and continues through the end of the 2016-17 year. Starting in 2017-18, the new Every Student Succeeds Act (ESSA) will allow states to use up to seven percent of their Title I allotments to improve the bottom five percent of schools. States may choose to dole out funds via formula or competitive grants, but districts are the ones responsible for using evidence-based practices to improve schools.

Under the old SIG rules, the federal government required schools to choose one of these four turnaround models:

  • Transformation: replace the principal and reform instructional and teacher-evaluation practices
  • Turnaround: replace the principal and at least half of the school staff
  • Restart: reopen the school under a charter or education management organization
  • Closure: close the school and enroll its students in higher-performing schools

The new report analyzed the transformation, turnaround, and restart models, and found no statistically significant effects for any of them. The authors did find positive, but not statistically significant, effects on math and reading scores for schools receiving the grant, but lower high school graduation rates. Critics of the report have noted that the statistical model chosen was not sensitive enough to detect small effects. The authors found mixed effects each year, of a size that many studies would have had the power to flag as significant, but under this design they remain insignificant. To put the magnitude of these effects in perspective: the effect of decreasing elementary class sizes by seven students is about 0.2 standard deviations; the effect of urban charter schools compared to their neighborhood schools after one year is 0.01 standard deviations in math and -0.01 in reading (0.15 and 0.10 after four years). According to the Mathematica study, the effects of SIG in 2012-13 were 0.01 standard deviations in math and 0.08 standard deviations in reading, along with a drop in the graduation rate (note that SIG had a positive impact on the graduation rate in 2011-12, another sign that these estimates are statistically indistinguishable from zero). Not enough to conclude a positive effect, for sure, but not nothing, either.

 


I’ll offer a few of my own thoughts (based on research, of course) on why SIG didn’t have the success that was hoped for:

1. The authors found no evidence that the grant funds actually increased per-pupil spending. In government-speak, the funds may have supplanted other funding streams instead of supplementing them, even though the law states that federal funds are supposed to supplement other spending. They found that SIG schools spent about $245 more per student than similar non-SIG schools in 2011-12, and only about $100 more in 2012-13 (again, these differences are not statistically significant, meaning we can’t confidently say they aren’t zero). Recent studies have shown that spending makes a difference in education, so this may help explain why we didn’t see a difference here.

2. Students in many priority schools (the bottom five percent of schools), which are the ones that qualified for SIG grants, may have had the option to transfer to higher-performing schools. While the report doesn’t address this, students with more involved parents and better academic achievement may have been more likely to take advantage of that option, lowering the average scores of the schools they left behind. Students perform better when surrounded by higher-performing peers, so the lack of an overall effect could partly reflect the loss of higher-achieving students.

3. Schools receiving SIG grants were high-poverty and high-minority. The average rate of students eligible for free or reduced-price lunch (FRL) in the study group was 83 percent, with non-white students making up 91 percent of school populations (compared with about 50 percent FRL-eligible and 50 percent non-white among students nationwide). While the resources allocated through SIG should have made spending more equitable, these schools may have still struggled with recruiting and retaining experienced, qualified teachers, which is often a challenge for high-poverty, high-minority schools. Research is clear that integrated schools have better outcomes for students than segregated schools. Yet the reform strategies used under SIG (replacing school staff and/or converting to a charter school) did little to improve school integration.

Hopefully, states and districts will learn from these lessons and use school reforms that fundamentally change the practices of the school, not just a few personnel: increased funding, school integration, changes in instructional practices, meaningful teacher/principal mentoring and development, and/or wrap-around services for students in poverty or who have experienced trauma.






January 31, 2017

Get the facts on school segregation

School “resegregation” has been in the news lately, but is it real?  Are our schools becoming less diverse, even as our student body becomes increasingly so?

We tackle these questions, as well as multiple others, in our new report, “School Segregation Then & Now: How to move toward a more perfect union.”

  • Are integrated schools better for students?
  • How does race interact with socioeconomic status in school enrollments?
  • How do you measure integration?
  • How does segregation affect the distribution of resources, such as teachers and funding?
  • What can school districts do to create more diverse schools?

We hope that you will find this report informative and inspiring, as we aim to strengthen our schools and our society.

 







January 27, 2017

7 reasons why school choice ≠ school reform

I attended an event this week on Race, Poverty, and School Reform, and I was surprised to hear almost every panelist discuss choice as the best way to reform schools. Research doesn’t support their claims, however.  While choice is great and helps parents find programs and schools that best fit their children’s needs, it is not a panacea for every challenge in education.  Choice doesn’t always have to come from outside the traditional public school system, either.  Finally, choice is not the same as reform: parents choosing a school doesn’t always result in better outcomes for their children.

  1. About 87 percent of America’s school-age children are in public schools, including the five percent in charter schools. We’ve spent decades creating systems to serve students, and those aren’t likely to go away soon. So, if we want to improve outcomes for students today, we have to work within that system.

 

  2. Traditional school districts offer many students choices. Thirty-seven percent of all parents reported having choices within their local public schools in 2012. This includes magnet schools, charters (both district-run and others), and districts offering flexible attendance zones or transfers.  Many districts offer specialized schools and programs such as dual-language immersion, STEM, or the arts.

 

  3. Charter schools aren’t necessarily better than traditional public schools. CREDO found that only about a quarter of charter schools outperform their local counterparts, while 19 percent of charters perform worse than their local traditional school in reading, and 31 percent perform worse in math. Granted, charters in urban settings and those that serve students in poverty do tend to outperform their local counterparts, but part of this is due to poorly performing traditional public schools in those regions.  Even with this growth, most poor and urban students in charters are not catching up with their more advantaged peers.  And, while the overall average is positive, traditional schools outperformed charters in about one-third of the cities studied.  So, while charters may be a good option for some, they are not across-the-board saviors for student achievement.


  4. School choice in any form (district choice, charters, and vouchers) can make segregation worse, which has negative impacts on students’ achievement and life outcomes. While some charters are intentionally diverse, only four states (Mississippi, Nevada, North Carolina, and South Carolina) have laws requiring charter schools to reflect, to some degree, the makeup of their local traditional public schools. Very few public school districts use controlled choice models that aim to balance parental choice with diverse school populations.  Research also shows that parents tend to choose schools based on location and on demographics that match their own.

 

  5. Private schools aren’t necessarily better than traditional schools, either. Results are hard to measure, as most programs don’t require private schools to participate in state tests. High school graduation rates are generally higher, but that may also be due to admissions-based cream-skimming and/or relaxed graduation requirements (speculation on my part, but echoed by other researchers).  While some programs have shown positive results (New York, DC), others have harmed student achievement.  Students in the Louisiana voucher program saw significant declines, dropping 16 percentile points in math and eight in reading.  Some studies have shown that private schools perform worse than public schools once demographic factors are accounted for.

[Figure: Impact of Louisiana Voucher Program on Student Achievement after 2 years]

 

  6. School choice in the form of vouchers doesn’t always serve every student. Very few voucher programs require private school providers to adhere to IDEA laws for special education students (outside of programs that cater specifically to special education students), and no states require participating schools to address the needs of English language learners. Voucher laws allow private schools to keep their own admissions criteria, which encourages more schools to participate.  However, these criteria often discriminate against students based on their religion or sexual orientation (only Maine and Vermont prohibit religious schools from participating).  Some private schools may also charge extra fees for sports or other programs, which may exclude low-income families from participating.  Few voucher programs provide transportation, which may also be limiting.

 

  7. Full-time virtual schools, which serve about 180,000 students nationwide, have been shown to grossly underperform other types of schools. Only two percent of virtual schools outperformed their traditional public school counterparts in reading, and zero percent had better results in math. CREDO estimates that attending a virtual school is the equivalent of not attending school at all for a year in math, and of losing 72 days of instruction in reading.


School choice can be great for some families and some students.  However, the reality is that parents choosing a school doesn’t mean that the chosen school will produce better student achievement overall.  While some education reformers are pushing for increased school choice as a way to improve education, the research just doesn’t support this notion, at least not in the current framework.  What we should be doing is learning from high-performing schools in every sector (traditional, charter, and private) to replicate effective administrative and instructional practices.  While competition itself may someday push schools to improve, that doesn’t help today’s students, and there’s no guarantee that competition makes schools better, anyway.  Today’s students deserve true reform based on evidence, not ideology, so that they receive the best education possible.






December 7, 2016

PISA scores remain stagnant for U.S. students

The results of the latest PISA, the Program for International Student Assessment, are in, and as usual we have an interpretation of the highlights for you.

If you recall, PISA is designed to assess not just students’ academic knowledge but their application of that knowledge. It is administered to 15-year-olds across the globe every three years by the Paris-based Organisation for Economic Cooperation and Development (OECD); in the United States it is coordinated by the Department of Education’s National Center for Education Statistics (NCES). Each iteration of PISA has a different focus, and the 2015 version homed in on science, though it also tested math and reading proficiency among the roughly half-million teens who participated in this round. So, how did American students stack up?

In short, our performance was average in reading and science and below average in math compared with other OECD member countries.  Specifically, the U.S. ranked 19th in science, 20th in reading, and 31st in math among the 35 OECD members. But PISA was administered in countries beyond the OECD, and among that total group of 70 countries and education systems (some regions of China are assessed as separate systems), U.S. teens ranked 25th in science, 22nd in reading, and 40th in math.  Compared with 2012, scores were basically flat in science and reading but dropped 11 points in math.


Before you get too upset over our less-than-stellar performance, though, there are a few things to take into account.  First, scores overall have fluctuated in all three subjects.  Some of the top performers such as South Korea and Finland have seen 20-30 point drops in math test scores from 2003 to 2015 at the same time that the U.S. saw a 13 point drop.  Are half of the countries really declining in performance, or could it be a change in the test, or a change in how the test corresponds with what and how material is taught in schools?

Second, the U.S. has seen a large set of reforms over the last several years, which have disrupted the education system.  As in any system, a disruption may cause a temporary drop in performance before things eventually stabilize.  Many teachers are still adjusting to teaching the Common Core Standards and/or Next Generation Science Standards; the 2008 recession caused shocks in funding levels that we’re still recovering from; and many school systems received waivers from No Child Left Behind that substantially changed state- and school-level policies.  And, in case you want to blame Common Core for lower math scores, keep in mind that not all test-takers live in states that have adopted the Common Core, and even those who do may have learned under the new standards for only a year or two.  Andreas Schleicher, who oversees the PISA test for the OECD, predicts that the Common Core Standards will eventually yield positive results for the U.S., but that we must be patient.

Demographics

Student scores are correlated to some degree with student poverty and the concentration of poverty in some schools.  Students from disadvantaged backgrounds are 2.5 times more likely to perform poorly than advantaged students.  U.S. schools where fewer than 25 percent of students are eligible for free or reduced-price lunch (about half of all students nationwide are eligible) would rank 2nd in science, 1st in reading, and 11th in math out of all 70 education systems.  At the other end of the spectrum, schools where at least 75 percent of students are eligible would rank 44th in science, 42nd in reading, and 47th in math.  Compared only with OECD countries, these high-poverty schools would beat only four countries in science, four in reading, and five in math.

Score differences for different races in the U.S. show similar disparities.

How individual student groups would rank compared to the 70 education systems tested:

Group        Science   Reading   Math
White        5th       4th       20th
Black        49th      44th      51st
Hispanic     40th      37th      44th
Asian        8th       2nd       20th
Mixed Race   19th      20th      38th

 

Equity

Despite the disparities in opportunity for low-income students, the share of low-income students who performed better than expected has increased by 12 percentage points since 2006, to 32 percent.  The share of score variation attributable to poverty decreased from 17 percent in 2006 to 11 percent in 2015, meaning that poverty became less of a determining factor in how a student performed.

Funding

America is one of the largest spenders on education, as we should be, given our high per capita income.  Many have complained that we should be outscoring other nations given our higher spending levels, but the reality is that high levels of childhood poverty and inequitable spending often counteract the money put into the system.  For more on this, see our previous blog post.






November 17, 2016

What does “evidence-based” mean?

The Every Student Succeeds Act requires schools to use “evidence-based interventions” to improve schools.  The law includes definitions of what counts as evidence, and recent guidance from the Department of Education has provided additional clarification on what passes as “evidence-based.”  Mathematica has also put out a brief guide to different types of evidence; it uses categories similar to the Department of Education’s, but also explains data we may see in the media or from academic researchers that don’t qualify as hard evidence yet can still help us understand policies and programs.


What follows is a brief summary of what qualifies as “evidence-based,” starting with the strongest category:

Experimental Studies:  These are purposefully created experiments, similar to medical trials, that randomly assign students to treatment or control groups, and then determine the difference in achievement after the treatment period.  Researchers also check to make sure that the two groups are similar in demographics.  This is considered to be causal evidence because there is little reason to believe the two similar groups would have had different outcomes except for the effect of the treatment.  Studies must involve at least 350 students, or 14 classrooms (assuming 25 students per class) and include multiple sites.
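
To make the mechanics concrete, here is a minimal sketch (all numbers are simulated, not data from any study mentioned in this post) of how a researcher might analyze a simple randomized experiment: compare treatment and control scores with a two-sample t-test and compute an effect size in standard deviations.

```python
# Minimal sketch of analyzing a randomized experiment (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n = 350  # students per group (ESSA's floor is 350 students total)
# Simulated standardized test scores (mean 0, SD 1 in the control group).
control = rng.normal(loc=0.0, scale=1.0, size=n)
treatment = rng.normal(loc=0.08, scale=1.0, size=n)  # assume a true effect of 0.08 SD

# Two-sample t-test: can we rule out a zero difference?
t_stat, p_value = stats.ttest_ind(treatment, control)

# Effect size (Cohen's d): difference in means over the pooled SD.
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

print(f"p-value: {p_value:.3f}, effect size: {cohens_d:+.3f} SD")
```

Notice that even with 350 students per group, a true effect as small as 0.08 standard deviations will usually fail to reach statistical significance; that lack of power is essentially the critique leveled at the SIG study above.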

Quasi-experimental Studies:  These still have some form of comparison group, which may consist of students, schools, or districts with similar demographic characteristics.  However, even groups that seem similar on paper may still have systematic differences, which makes evidence from quasi-experimental studies slightly less reliable than evidence from randomized studies.  Evidence from these studies is often (but not always) considered causal, though experiment design and fidelity can greatly affect how well those conclusions carry over to other student groups.  Studies must involve at least 350 students, or 14 classrooms (assuming 25 students per class), and include multiple sites.

Correlational Studies: Correlational studies can’t prove that a specific intervention caused a positive or negative effect.  For example, if Middle School X requires all teachers to participate in Professional Learning Communities (PLCs), and it ends up with greater student improvement than Middle School Y, we can say that the improved performance was correlated with PLC participation.  However, other changes at the school, such as greater parental participation, could be the true cause of the improvement, so we cannot say the improvement was caused by PLCs, only that further study should be done to see if there is a causal relationship.  Researchers still have to control for demographic factors; in this example, Middle School X and Middle School Y would have to be similar in both their teacher and student populations.
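
The PLC example is easy to simulate. In the hypothetical sketch below, PLCs have no effect at all, but a hidden confounder (parental involvement, assumed higher at School X) still produces what looks like a sizable “PLC effect” in a naive comparison of the two schools.

```python
# Illustrative sketch (invented numbers): why correlation can mislead.
import numpy as np

rng = np.random.default_rng(seed=0)
n = 500  # students per school

# Hidden confounder: parental involvement, higher at School X (the PLC school).
involvement_x = rng.normal(0.5, 1.0, n)
involvement_y = rng.normal(0.0, 1.0, n)

TRUE_PLC_EFFECT = 0.0  # assume PLCs do nothing at all

# Scores depend on involvement plus noise; PLCs contribute nothing.
scores_x = TRUE_PLC_EFFECT + 0.4 * involvement_x + rng.normal(0, 1, n)
scores_y = 0.4 * involvement_y + rng.normal(0, 1, n)

naive_effect = scores_x.mean() - scores_y.mean()
print(f"Naive 'PLC effect': {naive_effect:+.2f} SD (true effect is zero)")
# Prints roughly +0.2 SD -- the confounder masquerades as a program effect.
```

This is exactly why randomization matters: assigning the program at random breaks the link between the confounder and who receives the program.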

With all studies, we also have to consider who was involved and how the program was implemented.  A good example of this is the class-size experiment performed in Tennessee in the 1980s.  While that randomized controlled trial found positive effects of reducing class size by an average of seven students per class, when California reduced class sizes in the 1990s it didn’t see effects as strong.  Part of this was implementation – reducing class sizes means hiring more teachers, and many inexperienced, uncertified teachers had to be placed in classrooms to fill the gap, which could have diluted the positive effect of smaller classes.  Also, students in California may differ from students in Tennessee; while this seems less likely to matter for something like class size, it could be true for more specific programs or interventions.

An additional consideration when looking at evidence is not only statistical significance (whether or not we can be certain that the effect of a program wasn’t actually zero, using probability), but the effect size.  If an intervention has an effect size of 0.01 standard deviations* (or other units), it may only translate to the average student score changing a fraction of a percentage point.  We also have to consider if that effect is really meaningful, and if it’s worth our time, money, and effort to implement, or if we should look for a different intervention with greater effects.  Some researchers would say that an effect size of 0.2 standard deviations is the gold standard for really making meaningful changes for students.  However, I would also argue that it depends on the cost, both of time and money, of the program.  If making a small schedule tweak could garner 0.05 standard deviations of positive effect, and cost virtually nothing, then we should do it.  In conjunction with other effective programs, we can truly move the needle for student achievement.
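
To see why cost matters, compare two hypothetical programs (all numbers invented for illustration): a cheap schedule tweak with a small effect can deliver far more improvement per dollar than an expensive program that clears the 0.2 “gold standard.”

```python
# Hypothetical cost-effectiveness comparison of two interventions.
programs = {
    "schedule tweak":   {"effect_sd": 0.05, "cost_per_student": 10},
    "tutoring program": {"effect_sd": 0.20, "cost_per_student": 1200},
}

for name, p in programs.items():
    # Effect delivered per $1,000 spent per student.
    per_thousand = p["effect_sd"] / p["cost_per_student"] * 1000
    print(f"{name}: {p['effect_sd']:.2f} SD at ${p['cost_per_student']}/student "
          f"-> {per_thousand:.2f} SD per $1,000")
```

On these made-up numbers, the tweak yields 5.00 SD per $1,000 against 0.17 for the tutoring program, which is the point: small, cheap gains can be worth stacking alongside bigger-ticket reforms.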

School administrators should also consider the variation in test scores.  While most experimental studies report on the mean effect size, it is also important to consider how high- and low-performing students fared in the study.

Evidence is important and should guide policy decisions.  However, we have to keep its limitations in mind and be cautious consumers of data, making sure we truly understand how a study was done, whether its results are valid, and whether they translate to other contexts.

 

*Standard deviations are standardized units used to help us compare programs, considering that most states and school districts use different tests.  The assumption is that most student achievement scores follow a bell curve, with the average score at the top of the curve.  In a standard bell curve, a change of one standard deviation for a student at the 50th percentile would bump him/her up to about the 84th percentile, or down to about the 16th percentile, depending on the direction of the change.  A report of a program’s effect size typically indicates how much the mean of the students who participated changed from the previous mean, or how it differed from the group of students who didn’t receive the program.
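
If you want to check that arithmetic yourself, here is a small sketch that uses the normal curve to translate an effect size into percentile movement, using effect sizes mentioned in this post.

```python
# Translate an effect size (in SDs) into percentile movement,
# assuming scores follow a normal (bell) curve.
from scipy.stats import norm

def new_percentile(start_percentile: float, effect_sd: float) -> float:
    """Where a student at start_percentile lands after a shift of effect_sd."""
    z = norm.ppf(start_percentile / 100)   # percentile -> z-score
    return norm.cdf(z + effect_sd) * 100   # shifted z -> new percentile

for effect in (0.01, 0.05, 0.2, 1.0):
    print(f"+{effect:.2f} SD moves the median student to the "
          f"{new_percentile(50, effect):.1f}th percentile")
# +0.01 SD -> 50.4th, +0.05 SD -> 52.0th, +0.20 SD -> 57.9th, +1.00 SD -> 84.1th
```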

Filed under: CPE, Data, ESSA — Chandi Wagner @ 3:39 pm




