

The EDifier

September 28, 2016

How do we measure the immeasurable — and should we?

We address what we assess. I never cared much about how far I walked until I bought a Fitbit and saw that my friends apparently walk 15 miles a day. The same is true of schools.

Under No Child Left Behind (NCLB), we began assessing our students’ math, reading, and science abilities, and test scores improved.  While some of that growth may have been due to teachers teaching to the test or students adapting to standardized assessments, we should still acknowledge that having stronger data about achievement gaps has helped us build the argument for greater equity in education.

The Every Student Succeeds Act (ESSA) adds a new, non-academic factor to school accountability in response to the over-emphasis on tested subjects that many schools experienced under NCLB. States have to determine what their accountability plans will include, and policy wonks are chiming in with research and cautionary tales. It seems that we can all agree that the non-academic factor should be equitable (not favoring particular student groups), mutable (able to be changed), measurable (we have to be able to put some sort of ranking or number on it), and important to student growth and learning (or else, who cares?). So far, I haven’t heard any consensus emerge from the field on what this could look like.


The reality is that states may even want to consider testing several different variables to see what the data tells them. The non-academic variable could be minimally weighted until states are sure that their data is reliable, both ensuring that schools aren’t penalized for faulty data and that schools don’t try to game the new system. States may also choose to use multiple indicators so that pressure isn’t exerted on one lone factor. States also have to keep in mind that children develop at different rates: while chronic absenteeism is a problem for students of all ages, first-graders’ ability to self-regulate their emotions, for example, varies with gender and age.

A group of CORE districts in California has been testing a “dashboard” of metrics for several years and is offering its strategy to the entire state, as documented by Stanford’s Learning Policy Institute. Forty percent of a school’s rating is based on social and emotional learning indicators, including measures of social-emotional skills; suspension/expulsion rates; chronic absenteeism; culture/climate surveys from students, staff, and parents; and English learner re-designation rates. The other 60% is based on academic performance and growth.
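To make that arithmetic concrete, here is a minimal sketch in Python of how a composite rating under the 40/60 split described above might be computed. Only the weighting scheme comes from the dashboard description; the indicator names and sample values are hypothetical. Note that the “minimally weighted” idea from the previous paragraph is just a matter of lowering one constant.

```python
# Hypothetical sketch of a CORE-style composite school rating.
# The 40% social-emotional / 60% academic split comes from the dashboard
# described above; indicator names and sample values are invented.

SEL_WEIGHT = 0.40       # lower this while the non-academic data is unproven
ACADEMIC_WEIGHT = 0.60  # weights must sum to 1.0

def composite_rating(sel_indicators, academic_indicators):
    """Average each bucket (scores normalized to 0-100),
    then combine the buckets with the 40/60 weighting."""
    sel_score = sum(sel_indicators.values()) / len(sel_indicators)
    academic_score = sum(academic_indicators.values()) / len(academic_indicators)
    return SEL_WEIGHT * sel_score + ACADEMIC_WEIGHT * academic_score

school = {
    "sel": {
        "social_emotional_skills": 72,  # survey-based measure
        "suspension_expulsion": 88,     # inverted so higher = better
        "chronic_absenteeism": 81,      # inverted so higher = better
        "culture_climate_survey": 75,
        "el_redesignation": 64,
    },
    "academic": {
        "performance": 70,
        "growth": 78,
    },
}

print(round(composite_rating(school["sel"], school["academic"]), 1))  # 74.8
```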

The reality is that our students need more than just math and reading.  They need to learn how to interact with others who are different from themselves.  They need to be able to creatively problem solve.  They need to think critically about the world around them.  Good teachers have been teaching their students these skills for decades; now we just have to make sure that all students have these enriching opportunities.

Filed under: Accountability, CPE, ESSA — Chandi Wagner @ 8:00 am





June 6, 2016

Behind every data point is a child

At CPE, we are data-driven. We encourage educators, school leaders, and advocates to be data-driven as well. (Indeed, we have a whole website, Data First, dedicated to just that. If you haven’t seen it, it’s worth your time to check out.) So while we think an over-abundance of data is a good problem to have, we often remind ourselves and others to take a step back before acting on it, and consider that every data point represents a living, breathing, complex, does-not-fit-the-mold child.

Clearly, good data can lead you to solutions for improving policy and practice in the aggregate. It can also provide insights into particular classrooms or even students. But ultimately what an individual child needs is going to be, well, quirky. We may well find out that Joey struggled with fractions this quarter even though he did well in math the quarter before. If we keep digging, we might also discover that he was absent eight days. But the data won’t tell us why. We won’t even know if the inference that Joey’s fraction trouble was due to his multiple absences is the right one. There could be a million things going on with Joey that only he and his parents can help us understand. But we need to find out before we can effectively intervene.
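As a sketch of what that first layer of digging might look like, here is a hypothetical Python snippet that flags students whose scores dropped between quarters while their absences climbed. The field names and thresholds are invented; the point is that the output is a prompt for a conversation, not a diagnosis.

```python
# Hypothetical sketch: flag students whose math scores fell while absences
# rose. The data can surface Joey's struggle, but it cannot tell us why.

students = [
    {"name": "Joey", "q1_math": 85, "q2_math": 62, "q2_absences": 8},
    {"name": "Mia",  "q1_math": 78, "q2_math": 80, "q2_absences": 1},
]

SCORE_DROP = 10    # invented threshold: points lost quarter-over-quarter
ABSENCE_LIMIT = 5  # invented threshold: absences in the quarter

for s in students:
    dropped = s["q1_math"] - s["q2_math"] >= SCORE_DROP
    often_absent = s["q2_absences"] >= ABSENCE_LIMIT
    if dropped and often_absent:
        # Correlation only: the next step is a conversation with the
        # student and parents, not an automatic intervention.
        print(f"{s['name']}: score fell and absences rose -- follow up")
```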

NPR recently ran a story on Five Doubts About Data-Driven Schools that highlights some of the risks of an absolutist approach to data. I will address just two in this space, but encourage you to read the article itself. It’s short.

One: Some critics believe a hyperfocus on data can suppress rather than spark motivation to do better, particularly for low-scoring students. Publishing data that points out differences between individuals or groups can lead to what psychologists call “stereotype threat.” According to the article, “[M]erely being reminded of one’s group identity, or that a certain test has shown differences in performance between, say, women and men, can be enough to depress outcomes on that test for the affected group.”

I have had my own qualms about the practice in some schools of displaying student test scores, whether of individual students in the classroom or reported by teacher in the school building. There can be great value in having students examine their own data, and helping them use it to take greater charge of their own learning. But there’s also a fine line between encouraging constructive self-examination and reinforcing a potentially destructive perception of failure. Before instituting such a policy or practice, principals and district leaders should think very carefully about the messages being sent versus the messages students, parents and teachers actually hear.

Two: Just because we can collect the data, should it be part of a student’s permanent record? Most would agree that universities and potential employers should have access to student transcripts, grades, test scores, and other academic information when making admissions or employment decisions. But, as the article points out, we are entering an era when psychometricians will be able to measure characteristics such as grit, perseverance, teamwork, and leadership. How confident should we be in this data? And even if it is reliable, should we even consider data on traits exhibited in childhood and adolescence that are arguably mutable, and therefore may no longer accurately describe the individual? I have similar concerns about a child’s disciplinary record following him or her into adulthood.

Over and over again, the availability and effective use of education data has been shown to have a tremendous impact on improving performance at the system, school, and individual levels. Back to Joey and fractions: had his teacher not looked at his data, she would not have identified his struggle, and it might have remained hidden only to become worse over time. This way she is able to dig deeper, ask questions, find out what Joey needs, and ideally provide extra help so he will succeed.

But we also need to guard against the overuse of data, lest we allow it to reduce all of a student’s intellect, growth, production, and character to a number and lose a picture of the child.

Filed under: Accountability, CPE, Data — Patte Barth @ 1:39 pm





May 4, 2016

Let’s think about time

Editor’s Note: Breanna Higgins is a former teacher and spring intern at CPE.

Let’s start to think about time and realistic timelines for how long reform and school improvement really take. This era of accountability expects superintendents to turn around failing schools, or even whole districts, within a couple of years. Each new innovative reform or program is expected to be the next great thing; districts often implement several new programs at the same time to increase the potential for success.

Instant gratification, instant improvement. Superintendents and school and district leaders want to see test scores rise instantly to show that their reforms worked. Unfortunately, this rarely happens. Test scores sometimes rise, but then quickly flatline again. It’s not necessarily because the reform didn’t work; it may be that we just need to be patient.

We need to devote years to strong and faithful implementation. Teachers need to be trained, in more than the week before school, in how to use new programs. Teachers also need time to figure out how to teach effectively amid these changes, and it will take years for them to become proficient in a new system. Teachers see reforms come and go so quickly that the “this too shall pass” mentality is not just a line; it is very real. Teachers don’t feel the need to become heavily invested in a new reform or program when they know it will be swapped out again in a year or two.

A district that truly commits to a reform needs to commit long term. The reform needs to be rolled out in stages and implemented carefully, and timelines and hopes for seeing success should be realistic. Teachers are the main element of any reform; if they do not believe in the program, or do not believe it will be around long enough to matter, it won’t have much of an impact. When a district commits to long-term action, teachers have time to adjust, see changes in the classroom, and invest in a program they can see the district has committed to. The district needs to be willing to ride out the ups and downs of a reform. Some experts in school reform believe it takes five years simply to implement a new reform fully, with achievement results following from there.

School improvement takes time. Policymakers and communities need to be patient and allow reforms to be implemented well, and slowly, to see real improvement. A new program every year only ensures that most people “on the ground” will ignore it.

Filed under: Accountability, CPE, Public education, School boards — Breanna Higgins @ 3:15 pm





April 14, 2016

What’s different about ESSA?


The Elementary and Secondary Education Act of 1965 (ESEA) created the starting point for equity-based education reforms. It established categorical aid programs for specific subgroups at risk of low academic achievement. “Title I” comes from this act; it created programs to improve education for low-income students. No Child Left Behind (NCLB) was a reauthorization of ESEA that gave the federal government more power to ensure that all students received an equitable education, with standardized testing as the vehicle for holding schools to high standards.

In 2015, the Every Student Succeeds Act (ESSA) again reauthorized ESEA and changed much of the language and policy of NCLB. At its foundation, the law gave a great deal of decision-making power back to the states. Although states still need to set high standards, test their students, and intervene in low-performing schools, the states themselves will have the power to determine the “how.”

The table below summarizes the key differences between NCLB and ESSA. It was compiled from several sources (listed at the bottom) that provide a great deal more detail for those interested in learning more.

 

[Table: key differences between NCLB and ESSA]

 

-Breanna Higgins

 

Sources:

http://www.ncesd.org/cms/lib4/WA01000834/Centricity/Domain/52/GeneralNCLB%20vs%20ESSA%20Comparison%20-%20Title%20I-Federl%20Programs.pdf

http://neatoday.org/2015/12/09/every-student-succeeds-act/

http://all4ed.org/essa/

http://www.ascd.org/ASCD/pdf/siteASCD/policy/ESEA_NCLB_ComparisonChart_2015.pdf

Filed under: Accountability, CPE, ESSA — Breanna Higgins @ 1:10 pm





February 19, 2016

When report cards collide

One surefire way for education policy groups to get press is to release a state report card. Any kind of ranking is clickbait for news outlets. Plus, with a state-of-education report card you get a bonus man-bites-dog story when the grade-giving institution is the one being graded. Consequently, organizations ranging from business interests to teachers’ unions to think tanks have gotten into the act at one time or another. But readers should beware: when it comes to ranking states on education, a rose is not a rose is not a rose.

Three state report cards released over the winter show how widely the grades vary, even though they are all ostensibly evaluating the same thing – public education. The American Legislative Exchange Council published its Report Card on American Education in November. Just last week, the Network for Public Education released a 50 State Report Card. Both ALEC and NPE are advocacy organizations with clear, and contradictory, agendas. January saw the release of Education Week’s annual Quality Counts, which, as the education publication of record, represents the Goldilocks in this bunch.

What, if anything, can we learn by looking at these three rankings collectively? On the one hand, there is little agreement among the organizations regarding which states are top performers: no state makes the top 10 in all three lists. Yet on the other hand, there is consensus that no state is perfect and that much more work needs to be done, since no state earned an ‘A.’

Obviously, these reports differ because they value different things. ALEC and NPE grade states on the education policies they like. ALEC, which advertises itself as supportive of “limited government, free markets and federalism,” gives high marks to states that promote choice and competition, such as allowing more charter schools, providing private school options with taxpayer support, and having few or no regulations on homeschooling. NPE emphasizes the “public” in public education and opposes privatization and so-called “corporate reforms” such as merit pay, alternative certification for teachers, and especially high-stakes testing. Policies that earned high grades from ALEC, therefore, got low grades from NPE and vice versa.

The two had one area of agreement, however, albeit by omission. The report cards say little (ALEC) or nothing (NPE) about actual performance. The result is that grades on both reports have no relationship to student learning.

To its credit, ALEC features a separate ranking of states’ NAEP scores for low-income students as a way to draw attention to student performance. However, by doing so, the authors also cast a light on how little ALEC’s preferred policies relate to achievement. For every Indiana, which earned ALEC’s top grade and produces high NAEP scores, there is a Hawaii, whose low-income kids ranked 6th on NAEP but earned an ALEC ‘D+.’ NPE isn’t any better: despite including high-performing states like Massachusetts and Iowa in its Top 10, it awarded high-scoring Indiana an ‘F’ and Colorado a ‘D.’

In contrast to ALEC and NPE, Ed Week does not take positions on education policy. Its state report card focuses on K-12 achievement, school finance, and something it calls “chance for success” — demographic indicators related to student achievement, including poverty, parent education, and early-education enrollment. With policy out of the equation, Ed Week’s grades in each domain track fairly consistently with the overall grade, suggesting that the indicators identified by the authors tell us at least something about the quality of education.

So which state gets bragging rights? If you want to use one of these report cards as fodder for your own particular brand of advocacy, then by all means go with ALEC or NPE – whichever one fits your views best. But if you really want to know how well different education policies work, you’d be better off consulting the research. You can start here, here and here.

As for ranking states by their education systems? Stick with Goldilocks.





