

The EDifier

November 17, 2016

What does “evidence-based” mean?

The Every Student Succeeds Act requires the use of “evidence-based interventions” to improve schools.  The law also defines what counts as evidence, and recent guidance from the Department of Education has further clarified what qualifies as “evidence-based.”  Mathematica has also put out a brief guide to different types of evidence that uses categories similar to the Department of Education’s, but it also explains kinds of data we may see in the media or from academic researchers that don’t qualify as hard evidence yet can still help us understand policies and programs.

[Figure: ESSA evidence levels]

What follows is a brief summary of what qualifies as “evidence-based,” starting with the strongest level of evidence:

Experimental Studies:  These are purposefully designed experiments, similar to medical trials, that randomly assign students to treatment or control groups and then measure the difference in achievement after the treatment period.  Researchers also check that the two groups are demographically similar.  This is considered causal evidence because there is little reason to believe the two similar groups would have had different outcomes except for the effect of the treatment.  Studies must involve at least 350 students or 14 classrooms (assuming 25 students per class), and include multiple sites.
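To make this concrete, here’s a minimal sketch in Python of the core analysis in a randomized trial, using entirely hypothetical data and a made-up five-point treatment effect: randomly split students into two groups, then test whether the difference in mean scores could plausibly be zero.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical test scores for 350 students (the ESSA minimum),
# randomly split into treatment and control groups.
scores = rng.normal(loc=250, scale=30, size=350)
assignment = rng.permutation(350) < 175    # random assignment
treated = scores[assignment] + 5           # pretend the program adds ~5 points
control = scores[~assignment]

# Because assignment was random, a simple difference in means
# estimates the causal effect of the program.
effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"Estimated effect: {effect:.1f} points (p = {p_value:.3f})")
```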

Quasi-experimental Studies:  These still have some form of comparison group, which may consist of students, schools, or districts with similar demographic characteristics.  However, even groups that seem similar on paper may still differ in systematic ways, which makes evidence from quasi-experimental studies slightly less reliable than evidence from randomized studies.  Evidence from these studies is often (but not always) considered causal, though the study’s design and fidelity greatly affect how well its conclusions carry over to other student groups.  Studies must involve at least 350 students or 14 classrooms (assuming 25 students per class), and include multiple sites.
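One common quasi-experimental approach is matching: pair each treated school (or student) with an untreated one that looks similar on observed characteristics, then compare outcomes. A minimal sketch, with hypothetical data and a single matching variable (poverty rate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical schools: poverty rate (%) and average test score.
poverty_t = rng.uniform(20, 80, size=50)    # 50 schools using the program
poverty_c = rng.uniform(10, 90, size=200)   # 200 comparison schools
score_t = 300 - 0.8 * poverty_t + rng.normal(0, 5, 50) + 4  # +4 = program effect
score_c = 300 - 0.8 * poverty_c + rng.normal(0, 5, 200)

# For each program school, find the comparison school with the
# closest poverty rate and record the difference in scores.
gaps = [s - score_c[np.argmin(np.abs(poverty_c - p))]
        for p, s in zip(poverty_t, score_t)]
print(f"Matched estimate of the program effect: {np.mean(gaps):.1f} points")
```

Matching can only balance the characteristics we observe; any unobserved, systematic difference between the groups still contaminates the estimate, which is why this evidence ranks below a randomized trial.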

Correlational Studies: Correlational results can’t prove that a specific intervention caused a positive or negative outcome for the students in a program.  For example, if Middle School X requires all teachers to participate in Professional Learning Communities (PLCs) and ends up with greater student improvement than Middle School Y, we can say that the improved performance was correlated with PLC participation.  But other changes at the school, such as greater parental participation, could be what truly caused the improvement, so we cannot say the improvement was caused by PLCs, only that further study should be done to see whether there is a causal relationship.  Researchers still have to control for demographic factors; in this example, Middle School X and Middle School Y would have to be similar in both their teacher and student populations.
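The gap between “correlated with” and “caused by” is easy to see in a small simulation. In this hypothetical sketch, parental involvement drives both PLC adoption and test scores, and PLCs themselves add nothing; the raw correlation still makes PLCs look effective until we control for the confounder:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical confounder: parental involvement influences both
# whether a school adopts PLCs and how its students score.
parental = rng.normal(0, 1, n)
plc = (parental + rng.normal(0, 1, n)) > 0           # PLC adoption
scores = 250 + 10 * parental + rng.normal(0, 5, n)   # PLCs add nothing here

# The raw correlation makes PLCs look effective...
print("Correlation(PLC, score):", round(np.corrcoef(plc, scores)[0, 1], 2))

# ...but regressing scores on PLC adoption *and* parental
# involvement shows the PLC coefficient is near zero.
X = np.column_stack([np.ones(n), plc, parental])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print("PLC coefficient after controlling:", round(coef[1], 2))
```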

With all studies, we also have to consider who was involved and how the program was implemented.  A good example is the class-size experiment conducted in Tennessee in the 1980s.  While that randomized controlled trial found positive effects from reducing class size by an average of seven students per class, California didn’t see effects as strong when it reduced class sizes in the 1990s.  Part of this was implementation: reducing class sizes means hiring more teachers, and many inexperienced, uncertified teachers had to be placed in classrooms to fill the gap, which could have diluted the positive effect of smaller classes.  Students in California may also differ from students in Tennessee; while that seems less likely to matter for something like class size, it could matter for more specific programs or interventions.

An additional consideration when looking at evidence is not only statistical significance (whether we can be confident, using probability, that the effect of a program wasn’t actually zero), but also the effect size.  If an intervention has an effect size of 0.01 standard deviations* (or other units), it may translate to the average student’s score changing by only a fraction of a percentage point.  We have to consider whether that effect is really meaningful, and whether it’s worth our time, money, and effort to implement, or whether we should look for a different intervention with greater effects.  Some researchers consider an effect size of 0.2 standard deviations the gold standard for making truly meaningful changes for students.  However, I would argue that it also depends on the cost, in both time and money, of the program.  If a small schedule tweak could garner 0.05 standard deviations of positive effect at virtually no cost, then we should do it.  In conjunction with other effective programs, it can truly move the needle for student achievement.
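For a sense of scale, here’s a quick back-of-the-envelope conversion, assuming normally distributed scores, from an effect size in standard deviations to where a student who started at the 50th percentile would land:

```python
from scipy.stats import norm

# Percentile of a student who starts at the 50th percentile and
# gains d standard deviations, under a normal (bell-curve) model.
for d in [0.01, 0.05, 0.2]:
    print(f"effect of {d:.2f} SD -> about the {norm.cdf(d) * 100:.0f}th percentile")
```

An effect of 0.01 standard deviations barely moves the needle, while 0.2 lifts the median student to roughly the 58th percentile.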

School administrators should also consider the variation in test scores.  While most experimental studies report a mean effect size, it is also important to consider how high- and low-performing students fared in the study (see the sketch below).
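A quick hypothetical sketch of why this matters: an intervention that only helps students below the mean shows a modest average effect, but a quantile-by-quantile comparison reveals where the gains actually are.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical outcome: the program lifts low scorers by 10 points
# and does nothing for everyone else.
control = rng.normal(250, 30, size=1000)
treated = control + np.where(control < 250, 10, 0)

print(f"Mean difference: {treated.mean() - control.mean():.1f} points")
for q in (10, 50, 90):
    gap = np.percentile(treated, q) - np.percentile(control, q)
    print(f"{q}th percentile difference: {gap:.1f} points")
```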

Evidence is important and should guide policy decisions.  However, we have to keep its limitations in mind and be cautious consumers of data, making sure we truly understand how a study was done so we can judge whether its results are valid and translate to other contexts.


*Standard deviations are standardized units that help us compare programs, given that most states and school districts use different tests.  The assumption is that most student achievement scores follow a bell curve, with the average score at the top of the curve.  On a standard bell curve, a change of one standard deviation would move a student at the 50th percentile up to about the 84th percentile, or down to about the 16th percentile, depending on the direction of the change.  A reported effect size typically indicates how much the mean of the students who participated in a program changed from the previous mean, or how it differed from the mean of the students who didn’t receive the program.
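And for completeness, the effect size itself is usually a standardized mean difference: the gap between the group means divided by a pooled standard deviation. A sketch with made-up scores:

```python
import numpy as np

# Hypothetical test scores for a program group and a comparison group.
program = np.array([255, 262, 248, 270, 261, 259, 266, 253])
comparison = np.array([256, 252, 264, 250, 258, 255, 261, 253])

# Standardized mean difference (Cohen's d): the gap between group
# means expressed in pooled-standard-deviation units.
pooled_sd = np.sqrt((program.var(ddof=1) + comparison.var(ddof=1)) / 2)
d = (program.mean() - comparison.mean()) / pooled_sd
print(f"Effect size: {d:.2f} standard deviations")
```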

Filed under: CPE, Data, ESSA — Chandi Wagner @ 3:39 pm





November 2, 2016

Thoughts on nuance and variance

As we approach the 2016 general election, I’ve heard public officials, family, and friends make very clear statements about which side of the aisle they support.  Yet I find it hard to believe that the average American falls in line 100% with either political party, or supports every word and tenet of a particular public policy.  We are nuanced people.  Very few issues are as black-and-white as we’d like them to be.  Here’s a guide to things to consider when forming your stance on a particular issue, candidate, or political party, put in the context of education issues.

  1. Most issues have an “it depends” clause.

With the onslaught of information available today, it makes sense that we want answers that are black-and-white.  The reality, though, is that there’s gray area in most policies and practices.  We also have to balance our ideological values with evidence.  Charter school proponents may believe that free-market values and choice will improve public schools through vouchers and charter schools, but I haven’t seen widespread evidence that choice in and of itself improves academic achievement or long-term outcomes in significant ways.  Yes, there are individual students who have benefited, but there are also individual students who have lost out.  Charter school opponents claim that taking away publicly elected oversight through school boards damages the public’s ability to provide a free, quality education to all.  Yet the reality is that some public schools have dismal records, and charter or private schools have sometimes had success with the same students.  We have to acknowledge that we all want good things for our kids, and then use the evidence to figure out what that looks like without demonizing the other side.

  2. Most policies rely heavily on the quality of their implementation to be successful.

Common Core seems to be a prime example of this.  Two-thirds of Americans support some sort of common standards across the country, yet barely half support Common Core itself.  Support on both questions has dwindled significantly from about 90% in 2012.  Even presidential candidate Hillary Clinton has called the roll-out of Common Core “disastrous,” despite supporting the standards overall.

[Chart: public support for Common Core and common standards over time]

Source: http://educationnext.org/ten-year-trends-in-public-opinion-from-ednext-poll-2016-survey/

The standards were implemented quickly in many states, often without the curriculum materials or professional development to help teachers succeed in teaching them.  While support for Common Core seems to be leveling off among teachers, who are most familiar with the standards, several states have repealed or are considering repealing them.  The new state standards written in South Carolina and Indiana are extremely similar to the Common Core, which suggests that what people disagree with may be not so much the concept or content as how the standards were implemented, and the ensuing political backlash.


  3. Statistics usually tell us about an average (the typical student), but variance is also important.

Charter schools are a prime example of this.  On average, they produce student achievement outcomes similar to those of traditional public schools.  But some charter schools outperform their counterparts, and some woefully underperform.  We have to think about those schools, too.

This is also clear in school segregation.  The average black student in the U.S. attends a school that is 49% black, 28% white, 17% Latino, 4% Asian, and 3% “Other,” but that doesn’t mean every black student has this experience.  At the edges of the spectrum, 13% of U.S. public schools are over 90% black and Latino, while 33% of schools are less than 10% black and Latino.  To understand the reality, we need to look at the variety of students’ experiences (known in statistics-speak as “variance”), not just the average, as the sketch below illustrates.
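In code, the point is simply that very different distributions can share the same mean. A hypothetical sketch:

```python
import numpy as np

# Two hypothetical sets of 100 schools with the same average share
# of black students (49%), but radically different distributions.
uniform = np.full(100, 49.0)                      # every school at 49%
polarized = np.concatenate([np.full(50, 95.0),    # half nearly all-black...
                            np.full(50, 3.0)])    # ...half nearly none

for name, shares in [("uniform", uniform), ("polarized", polarized)]:
    print(f"{name}: mean = {shares.mean():.0f}%, std dev = {shares.std():.0f}")
```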

  4. There’s always room for improvement. “Fixing” a policy may mean making adjustments, not abandoning it altogether.

Student assessments under No Child Left Behind (2001) resulted in a narrowing of the curriculum.  But we also learned more about disadvantaged student groups and have continued closing the achievement gap for students of color.  Should we throw out testing altogether?  Some would say yes, but most Americans say no.  Graduation rates, college enrollment, and achievement scores have all increased since NCLB passed in 2001.  What we can do is improve student assessments.  Adjusting the consequences for students, teachers, and schools could reduce the narrowing of curriculum and subjects taught.  More well-rounded tests that encourage creative and critical thinking would help teachers emphasize those skills in class.  Continued improvement in data use can help teachers and school administrators adjust their practices and policies to sustain student growth.  States have the power to make some of these changes under the new Every Student Succeeds Act without dismantling the gains made under No Child Left Behind.






October 28, 2016

NAEP science scores reveal progress at lower grades, stagnation at 12th-grade

The National Assessment Governing Board released the 2015 science scores from the National Assessment of Educational Progress for fourth-, eighth-, and 12th-graders.  The results were positive overall, with achievement gaps narrowing and scores improving for almost all student groups in fourth and eighth grade.  Twelfth-grade scores remained stagnant across all groups.

The tests assess students’ ability to identify and use science principles, use scientific inquiry, and apply technological design across physical science, life science, and Earth and space science.  Student responses combine written items (multiple-choice and open-ended questions) with interactive computer tasks and hands-on tasks.

Of great concern, however, are the persistent gaps between students of different races, genders, and program statuses (English Learners and special education students).  While these gaps are narrowing, we have to figure out how to provide greater opportunities to all students.

[Chart: 2015 NAEP science achievement gaps by student group]

Source: http://nationsreportcard.gov/science_2015/files/overview.pdf


Some states are improving at faster rates than others (changes are noted only if statistically significant):

[Chart: state-by-state changes in NAEP science scores]

Source: www.nationsreportcard.gov


It seems that the gains made in fourth and eighth grade erode by 12th grade.  Some of this may be due to a lack of access to the engaging classes and curriculum that could draw students into STEM fields.

[Chart: students’ attitudes toward STEM]

Source: http://www.amgeninspires.com/students-on-stem/

The 2015 NAEP science scores show trends similar to the math and reading tests, which sharpens the question: How do we move the needle for 12th-graders while continuing to improve opportunities and achievement for all students?






October 20, 2016

Let’s talk about college and career readiness

Ensuring students have the skills they need to succeed in college and the workforce is widely recognized as the ultimate goal of K-12 education. Toward this end, many states and districts have adopted and implemented college and career-readiness standards — a move that has caused some angst and outright rejection around the country.

While we know these are normal responses to change, we also know that new initiatives (especially in education) are hampered without community buy-in.

It’s for this reason that the National School Boards Association and the National Association of Secondary School Principals have partnered with the Learning First Alliance’s Get it Right campaign to engage stakeholders around the importance of college and career readiness for all students.

The result of this joint project is a communications toolkit that includes resources and materials (some of which hail from CPE’s own bank) to help educators spur dialogue, answer questions, and hopefully build support for college and career-readiness standards.

Find the toolkit here. Watch a sneak peek of what you’ll find below.

Filed under: Career Readiness, CPE, standards — NDillon @ 7:30 am





October 19, 2016

2015 Graduation Rates: All-time high

The National Center for Education Statistics released the 2014-2015 on-time high school graduation rates, and they look good: 83.2%. This all-time high continues the upward trend we have been seeing for the last decade.

But not all states fare equally well:

[Chart: graduation rates by state]

While every student group is improving, you can see below that gaps between groups are still present.

[Chart: graduation rates by student group]

When you combine student poverty with state graduation rates, the picture becomes a bit clearer.

[Chart: state graduation rates vs. student poverty]

While the graph above shows only a best-fit line, it does show that states with higher poverty also tend to have lower graduation rates.  What we should be looking for are states with the same poverty rates as others but much higher graduation rates, to identify possible lessons.  Is it a more homogeneous population?  Are more resources invested in schools?  Do teachers have better training?  Are graduation requirements easier?  A lot goes into graduation rates.  So even though we can be excited that they’re increasing for all groups, expanding opportunities for thousands of students, we still have a lot of gaps to fill.
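One practical way to find those beat-the-odds states, sketched here with made-up numbers: fit the best-fit line, then look at each state’s residual (its actual graduation rate minus the rate the line predicts for its poverty level). Large positive residuals flag states outperforming their demographics.

```python
import numpy as np

# Hypothetical state data: child poverty rate (%) and graduation rate (%).
poverty = np.array([12, 15, 18, 20, 22, 25, 28, 30])
grad = np.array([90, 88, 84, 86, 80, 83, 76, 74])

# Fit the best-fit line: graduation rate as a linear function of poverty.
slope, intercept = np.polyfit(poverty, grad, 1)
predicted = slope * poverty + intercept

# Positive residuals = states doing better than their poverty
# level predicts; worth a closer look for transferable lessons.
residuals = grad - predicted
print("Residuals:", np.round(residuals, 1))
```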


Filed under: Achievement Gaps, CPE, Graduation rates, High school — Chandi Wagner @ 10:37 am




