
The EDifier

May 4, 2016

Let’s think about time

Editor’s Note: Breanna Higgins is a former teacher and spring intern at CPE

Let’s start to think about time and realistic timelines for how long reform and school improvement really take. This era of accountability expects superintendents to turn around failing schools, or even whole districts, within a couple of years. Each new innovative reform or program is expected to be the next great thing, and districts often implement several new programs at the same time to increase the potential for success.

Instant gratification, instant improvement. Superintendents and school and district leaders want to see test scores rise instantly to show that their reforms worked. Unfortunately, this rarely happens. Test scores sometimes rise, but then quickly flatline again. It’s not necessarily because the reform didn’t work; it may simply be that we need to be patient.

We need to devote years to strong and faithful implementation. Teachers need to be trained, in more than the week before school starts, in how to use the new programs. Teachers also need time to figure out how to teach effectively with these new changes, and it will take years for them to become proficient in a new system. Teachers see reforms come and go so quickly that the “this too shall pass” mentality is not just a line; it is very real. Teachers don’t feel the need to become heavily invested in a new reform or program when they know it will be swapped out again in a year or two.

A district that truly commits to a reform needs to commit long term. The reform needs to be rolled out in stages and implemented carefully. Timelines and hopes for seeing success should be realistic. Teachers are the main element of any reform, and if they do not believe in the program, or do not believe it will be around long enough to be worth their investment, it won’t have much of an impact. By committing to long-term action, districts give teachers time to adjust and see changes in the classroom, and teachers are able to commit to a program they see the district has committed to. The district needs to be willing to ride out the ups and downs of a reform. Some experts in school reform believe it takes five years simply to fully implement a new reform, with achievement results following from there.

School improvement takes time. Policymakers and communities need to be patient and allow reforms to be implemented well, and slowly, to see real improvement. A new program every year only ensures that most people “on the ground” will ignore it.

Filed under: Accountability,CPE,Public education,School boards — Breanna Higgins @ 3:15 pm

April 14, 2016

What’s different about ESSA?

The Elementary and Secondary Education Act of 1965 (ESEA) created the starting point for equity-based education reforms. It established categorical aid programs for specific subgroups that were at risk of low academic achievement. “Title I” comes from this act; it created programs to improve education for low-income students. No Child Left Behind (NCLB) was a reauthorization of ESEA that gave the federal government more power to ensure that all students received an equitable education, with standardized testing as the vehicle for holding schools to high standards.

In 2015, the Every Student Succeeds Act (ESSA) again reauthorized ESEA and changed much of the language and policy of NCLB. At its foundation, the law gave a great deal of decision-making power back to the states. Although states still need to set high standards, test their students, and intervene in low-performing schools, the states themselves will have the power to determine the “how.”

The table below provides the key differences between NCLB and ESSA. It was compiled from several sources (listed at the bottom) that provide a great deal more detail and specifics for those interested in learning more.


[ESSA Table]

-Breanna Higgins
Filed under: Accountability,CPE,ESSA — Breanna Higgins @ 1:10 pm

February 19, 2016

When report cards collide

One surefire way for education policy groups to get press is to release a state report card. Any kind of ranking is clickbait for news outlets. Plus, with a state-of-education report card you get a bonus man-bites-dog story when the grade-giving institution is the one being graded. Consequently, organizations ranging from business interests and teachers’ unions to think tanks have gotten into the act at one time or another. But readers should beware. When it comes to ranking states on education, a rose is not a rose is not a rose.

Three state report cards released over the winter show how widely the grades vary, even though they are all ostensibly evaluating the same thing – public education. The American Legislative Exchange Council published its Report Card on American Education in November. Just last week, the Network for Public Education released a 50 State Report Card.  Both ALEC and NPE are advocacy organizations with clear, and contradictory, agendas. January saw the release of Education Week’s annual Quality Counts which, as the education publication of record, represents the Goldilocks in this bunch.

What, if anything, can we learn by looking at these three rankings collectively? On the one hand, there is little agreement among the organizations regarding which states are top performers: no state makes the top 10 in all three lists. Yet on the other hand, there is consensus that no state is perfect and that much more work needs to be done, since no state earned an ‘A.’

Obviously, these reports differ because they value different things. ALEC and NPE grade states on the education policies they favor. ALEC, which advertises itself as supportive of “limited government, free markets and federalism,” rewards states that promote choice and competition, such as allowing more charter schools, providing private school options with taxpayer support, and having few or no regulations on homeschooling. NPE emphasizes the “public” in public education and opposes privatization and so-called “corporate reforms” such as merit pay, alternative certification for teachers, and especially high-stakes testing. Policies that earned high grades from ALEC, therefore, got low grades from NPE and vice versa.

The two had one area of agreement, however, albeit by omission. The report cards say little (ALEC) or nothing (NPE) about actual performance. The result is that grades on both reports have no relationship to student learning.

To its credit, ALEC features a separate ranking of states’ NAEP scores for low-income students as a way to draw attention to student performance. However, by doing so, the authors also cast light on how little ALEC’s preferred policies relate to achievement. For every Indiana, which earned ALEC’s top grade and produces high NAEP scores, there is a Hawaii, whose low-income kids ranked 6th on NAEP but earned an ALEC ‘D+.’ NPE isn’t any better: despite the appearance of high-performing states like Massachusetts and Iowa in its Top 10, NPE awarded high-scoring Indiana an ‘F’ and Colorado a ‘D.’

In contrast to ALEC and NPE, Ed Week does not take positions on education policy. Its state report card focuses on K-12 achievement, school finance, and something it calls “chance for success”: demographic indicators related to student achievement, including poverty, parent education, and early-education enrollment. With policy out of the equation, Ed Week’s grades in each domain track fairly consistently with the overall grade, suggesting that the indicators identified by the authors tell us at least something about the quality of education.

So which state gets bragging rights? If you want to use one of these report cards as fodder for your own particular brand of advocacy, then by all means go with ALEC or NPE – whichever one fits your views best. But if you really want to know how well different education policies work, you’d be better off consulting the research. You can start here, here and here.

As for ranking states by their education systems? Stick with Goldilocks.

February 3, 2016

PARCC test results lower for computer-based tests

In the 2014-15 school year, students took the Partnership for Assessment of Readiness for College and Careers (PARCC) exam on a pilot basis. The PARCC exam was created to align with the Common Core State Standards and is among the few standardized assessments that measure how well school districts are teaching higher-level competencies.

On February 3, Education Week reported that results for students who took the computer-based version of the exam were significantly lower than results for students who took the traditional pencil-and-paper version. While the article states that the PARCC organization does not have a clear answer for why this occurred, I will offer my own explanation based on my experience as a teacher of students who took this exam last year.

I taught high school history, and the largest discrepancy in results between students who took the computer versus the paper exam was at the high school level. Here is my theory for the discrepancy. Throughout students’ academic careers, we teachers teach them to “mark up” the text. This means that as they read books, articles, poems, primary sources, and so on, students should have a pen or pencil and highlighter in hand. There are many acronyms for how students should “mark up” their text. One is HACC: Highlight, Annotate, Circle unknown words, Comment. There are many others, but the idea is the same. Students are taught to summarize each paragraph in the margins and make note of key words. This helps students stay engaged with the reading, find main ideas, and think critically about what they are reading. It also makes it easier to go back and skim the text for the main ideas and remember what they read without re-reading.

Generally, students are forced to mark up and annotate the text this way, but, honestly, I still do this! And I would bet that many adults do too. If you need to read a long article at work, it often helps to print it out and read it with a pen in hand. It makes it easier to focus on what you are reading. Now imagine that someone is going to test you on that article. You will be even more anxious to read it carefully and write notes for yourself in the margins.

The point is that students are taught to do this when reading, especially when reading passages for exams with questions based on the passage. My own students had this drilled into them throughout their high school years. Sometime last year, the teachers learned that our school would be giving the pilot version of the PARCC exam to our students. During a teacher professional development day, we were asked to go online to the PARCC website, learn about the test, and take a practice exam. I encourage you to go online and take it for yourself — this exam is hard! We were asked to analyze the questions and think about ways we could change our own in-class exams to better align with PARCC. We were told that it would soon replace our state’s standardized exam.

One of the first things we all noticed was how long the reading passages are for the ELA portion of the test. It took a long time to read through them, and we all struggled to read them on a computer screen. I really wanted a printed version to write my notes on! The passages were long and detailed, and I felt as though by the time I reached the questions I would have to re-read the whole passage to find the answer (or find the section from which I could infer an answer). I knew the students would struggle with this and anticipated lower scores on this exam than on the state test. I was thankful that their scores wouldn’t actually count this year. But what happens when this becomes a high-stakes test?

As I anticipated, scores for students who took the computer-based exam were far lower than for those who took the traditional paper test. The Illinois State Board of Education found that, across all grades, 50% of students scored proficient on the paper-based PARCC exam, compared to only 32% of students who took the exam online. In Baltimore County, students who took the paper test scored almost 14 points higher than students of similar demographics who took the test on the computer.

The low scores themselves are a different story. Organizations will need to analyze the results of this major pilot and determine its validity, and students and teachers, if the test becomes mandatory, will have to adjust to better learn the standards and testing format associated with it. The bigger story is that there are significant hardships that come with taking a computer-based test.

My main concern is the reading passages. I don’t believe teachers should abandon the “mark it up” technique to bend to computer-based testing because learning how to annotate a text is valuable throughout people’s lives. I saw the students struggle to stare at the computer screen and focus on the words. Many used their finger on the screen to follow along with what they were reading. It was clearly frustrating for them not to be able to underline and make notes like they were used to doing.

Other concerns stem from the test being online. It requires internet access, a multitude of computers for students to test on, and students and teachers who are technologically savvy. When my school gave the test, it took several days, a lot of scheduling, and considerable disruption to get all students through it given our limited number of computers. Certain rooms of the building have a less reliable internet connection than others, and some students lost their connection while testing. Sometimes the system didn’t accept a student’s login or wouldn’t advance to the next page. There were no PARCC IT professionals in the building to fix these issues. Instead, teachers who didn’t know the system any better than the students tried to help.

Not all students were ultimately able to take or finish the exam because of these issues. Thankfully, their results didn’t count toward graduation! There are also equity concerns between students who are familiar with computers and typing and those who have had little exposure to technology. As a teacher in an urban school, I can tell you it was not uncommon to see students typing essays on their phones because they didn’t have a computer.

As a whole, I’m not surprised by the discrepancy in test scores, and I imagine that other teachers are not either. The Education Week article quotes PARCC’s chief of assessment as saying, “There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system.” This vague statement only scratches the surface. I encourage those analyzing the cause of the discrepancy to talk to teachers and students. Also, ask yourselves how well you would do taking an exam completely online, particularly one with long reading passages. –Breanna Higgins

Filed under: Accountability,Assessments,Common Core,High school,Testing — Breanna Higgins @ 4:27 pm

January 20, 2016

ESSA Gives More Power to the States

The Every Student Succeeds Act (ESSA) is the newest federal legislation aimed at improving the nation’s education systems. The act replaces the heavy hand of NCLB and places more emphasis on states to do the heavy lifting. There was a lot of criticism of state implementation of NCLB (some of the weaknesses and frustrations around the law may have been more the fault of implementation than of the law itself), and now the states will need to take on more responsibility for innovation in policy creation, testing, and accountability, alongside the compliance role they have played for years.

State and local education agencies will need to reflect on and improve their own staff and capacity to succeed in this important work. Education agencies have become increasingly political in recent years, and the average tenure of state chiefs is only 3.2 years. This unstable environment and rapid turnover in leadership make it more difficult for agencies to complete long-term goals and for staff to have a coherent sense of direction.

In addition to changing leadership, the recession reduced staffing in most education departments, leaving fewer employees to monitor the same numbers of schools, students, and federal funds and programs. Despite the upturn in the economy, EdWeek reports that staff numbers have not rebounded, which has left staff members overstretched and working on programs where they have little experience.

Understanding these staffing problems matters because they will be exacerbated as ESSA demands more of the states. States finally have the decision-making power they have been longing for, but an important question remains: do they have the capacity to follow through? We can hope that as states gain power, they will also be able to hire qualified employees who can devise the policies that are best for their state. They need experts to transform their lowest-performing schools and student groups, to create or revise accountability systems for schools, to create or adopt academic standards (Common Core is an option here but is not required), and to update school performance measures to include a school quality indicator. These initiatives all require experts to take the lead in creating and implementing the policies, as well as in evaluating their effectiveness.

Local education systems should be aware of the coming changes and work with states and schools to bridge gaps in the implementation of new policies. The more state and local systems cooperate and communicate, the better chance policies have of being faithfully implemented and becoming a success. –Breanna Higgins

Filed under: Accountability,CPE,ESSA,Public education — Breanna Higgins @ 10:47 am
