

The EDifier

June 6, 2016

Behind every data point is a child

At CPE, we are data-driven. We encourage educators, school leaders and advocates to be data-driven as well. (Indeed, we have a whole website, Data First, dedicated to just that. If you haven’t seen it, it’s worth your time to check out.) So while we think an over-abundance of data is a good problem to have, we often remind ourselves and others to take a step back before acting on it and consider that every data point represents a living, breathing, complex, does-not-fit-the-mold child.

Clearly, good data can lead you to solutions for improving policy and practice in the aggregate. It can also provide insights into particular classrooms or even students. But ultimately what an individual child needs is going to be, well, quirky. We may well find out that Joey struggled with fractions this quarter even though he did well in math the quarter before. If we keep digging, we might also discover that he was absent eight days. But the data won’t tell us why. We won’t even know if the inference that Joey’s fraction trouble was due to his multiple absences is the right one. There could be a million things going on with Joey that only he and his parents can help us understand. But we need to find out before we can effectively intervene.

NPR recently ran a story on Five Doubts About Data-Driven Schools that highlights some of the risks with an absolutist approach to data. I will just address two in this space, but encourage you to read the article itself. It’s short.

One: Some critics believe a hyperfocus on data can suppress rather than spark motivation to do better, particularly for low-scoring students. Publishing data that points out differences among individuals or groups can lead to what psychologists call “stereotype threat.” According to the article, “[M]erely being reminded of one’s group identity, or that a certain test has shown differences in performance between, say, women and men, can be enough to depress outcomes on that test for the affected group.”

I have had my own qualms about the practice in some schools of displaying student test scores, whether posting individual students’ scores in the classroom or reporting results by teacher across the school building. There can be great value in having students examine their own data and helping them use it to take greater charge of their own learning. But there’s also a fine line between encouraging constructive self-examination and reinforcing a potentially destructive perception of failure. Before instituting such a policy or practice, principals and district leaders should think very carefully about the messages being sent versus the messages students, parents and teachers actually hear.

Two: Just because we can collect the data, should it be part of a student’s permanent record? Most would agree that universities and potential employers should have access to student transcripts, grades, test scores and other academic information when making admissions or employment decisions. But, as the article points out, we are entering an era when psychometricians will be able to measure such characteristics as grit, perseverance, teamwork and leadership. How confident should we be in this data? And even if it is reliable, should we consider such data at all for traits exhibited in childhood and adolescence that are arguably mutable, and therefore may no longer accurately describe the individual? I have similar concerns about a child’s disciplinary record following him or her into adulthood.

Over and over again, the availability and effective use of education data have been shown to have a tremendous impact on improving performance at the system, school and individual level. Back to Joey and fractions: had his teacher not looked at the data, she would not have identified his struggle, and it might have remained hidden only to worsen over time. Instead, she is able to dig deeper, ask questions, find out what Joey needs and, ideally, provide the extra help he needs to succeed.

But we also need to guard against the overuse of data, lest we allow it to reduce all of a student’s intellect, growth, production and character to a number and lose sight of the child.

Filed under: Accountability, CPE, Data — Patte Barth @ 1:39 pm





May 4, 2016

Let’s think about time

Editor’s Note: Breanna Higgins is a former teacher and spring intern at CPE

Let’s start to think about time, and about realistic timelines for how long reform and school improvement really take. This era of accountability expects superintendents to turn around failing schools, or even whole districts, within a couple of years. Each new innovative reform or program is expected to be the next great thing; districts often implement several new programs at the same time to increase the potential for success.

Instant gratification, instant improvement. Superintendents and school and district leaders want to see test scores rise instantly and show that their reforms worked. Unfortunately, this rarely happens. Test scores sometimes rise, but then quickly flatline again. It’s not necessarily because the reform didn’t work; it’s that we need to be patient.

We need to devote years to strong and faithful implementation. Teachers need to be trained, in more than the week before school, in how to use the new programs. Teachers also need time to figure out how to teach effectively with these new changes, and it will take years for them to become proficient in a new system. Teachers see reforms come and go so quickly that the “this too shall pass” mentality is not just a line; it is very real. Teachers don’t feel the need to become heavily invested in a new reform or program when they know it will be swapped out again in a year or two.

A district that truly commits to a reform needs to commit long term. The reform needs to be rolled out in stages and implemented carefully. Timelines and hopes for seeing success should be realistic. Teachers are the main element of any reform, and if they do not believe in the program, or do not believe it will be around long enough to be worth their investment, it won’t have much of an impact. When a district commits to long-term action, teachers have time to adjust, see changes in the classroom, and commit to a program they can see the district has committed to. The district needs to be willing to take the time to ride out the ups and downs of a reform. Some experts in school reform believe it takes five years simply to implement a new reform fully, and that achievement results will follow from there.

School improvement takes time. Policymakers and communities need to be patient and allow reforms to be implemented well, and slowly, to see real improvement. A new program every year only ensures that most people “on the ground” will ignore it.

Filed under: Accountability, CPE, Public education, School boards — Breanna Higgins @ 3:15 pm





April 14, 2016

What’s different about ESSA?


The Elementary and Secondary Education Act of 1965 (ESEA) created the starting point for equity-based education reforms. It established categorical aid programs for specific subgroups at risk of low academic achievement. “Title I” comes from this act; it created programs to improve education for low-income students. No Child Left Behind (NCLB) was a reauthorization of ESEA that gave more power to the federal government to ensure that all students received an equitable education, with standardized testing as the vehicle for holding schools to high standards.

In 2015, the Every Student Succeeds Act (ESSA) again reauthorized ESEA and changed much of the language and policies of NCLB. At its foundation, the law gave a great deal of decision-making power back to the states. Although states still need to set high standards, test their students, and intervene in low-performing schools, the states themselves will have the power to determine the “how.”

The table below summarizes the key differences between NCLB and ESSA. It was compiled from several sources (listed at the bottom) that provide a great deal more detail and specifics for those interested in learning more.

 

ESSA Table

 

-Breanna Higgins

 

Sources:

http://www.ncesd.org/cms/lib4/WA01000834/Centricity/Domain/52/GeneralNCLB%20vs%20ESSA%20Comparison%20-%20Title%20I-Federl%20Programs.pdf

http://neatoday.org/2015/12/09/every-student-succeeds-act/

http://all4ed.org/essa/

http://www.ascd.org/ASCD/pdf/siteASCD/policy/ESEA_NCLB_ComparisonChart_2015.pdf

Filed under: Accountability, CPE, ESSA — Breanna Higgins @ 1:10 pm





February 19, 2016

When report cards collide

One surefire way for education policy groups to get press is to release a state report card. Any kind of ranking is clickbait for news outlets. Plus, with a state-of-education report card you get a bonus man-bites-dog story when the grade-giving institution is the one being graded. Consequently, organizations ranging from business groups to teachers’ unions to think tanks have gotten into the act at one time or another. But readers should beware. When it comes to ranking states on education, a rose is not a rose is not a rose.

Three state report cards released over the winter show how widely the grades vary, even though they are all ostensibly evaluating the same thing – public education. The American Legislative Exchange Council published its Report Card on American Education in November. Just last week, the Network for Public Education released a 50 State Report Card. Both ALEC and NPE are advocacy organizations with clear, and contradictory, agendas. January saw the release of Education Week’s annual Quality Counts, which, coming from the education publication of record, represents the Goldilocks in this bunch.

What, if anything, can we learn by looking at these three rankings collectively? On the one hand, there is little agreement among the organizations regarding which states are top performers: no state makes the top 10 in all three lists. Yet on the other hand, there is consensus that no state is perfect and that much more work needs to be done, since no state earned an ‘A.’

Obviously, these reports differ because they value different things. ALEC and NPE grade states on the education policies they like. ALEC, which advertises itself as supportive of “limited government, free markets and federalism,” rewards states that promote choice and competition, such as allowing more charter schools, providing private school options with taxpayer support, and placing few or no regulations on homeschooling. NPE emphasizes the “public” in public education and opposes privatization and so-called “corporate reforms” such as merit pay, alternative certification for teachers, and especially high-stakes testing. Policies that earned high grades from ALEC, therefore, got low grades from NPE and vice versa.

The two had one area of agreement, however, albeit by omission. The report cards say little (ALEC) or nothing (NPE) about actual performance. The result is that grades on both reports have no relationship to student learning.

To its credit, ALEC features a separate ranking of states’ NAEP scores for low-income students as a way to draw attention to student performance. However, by doing so, the authors also cast a light on how little ALEC’s preferred policies relate to achievement. For every Indiana, which earned ALEC’s top grade and produces high NAEP scores, there is a Hawaii, whose low-income kids ranked 6th on NAEP but which earned an ALEC ‘D+.’ NPE isn’t any better. Its Top 10 includes high-performing states like Massachusetts and Iowa, but NPE also awarded high-scoring Indiana an ‘F’ and Colorado a ‘D.’

In contrast to ALEC and NPE, Ed Week does not take positions on education policy. Its state report card focuses on K-12 achievement, school finance, and something it calls “chance for success”: demographic indicators related to student achievement, including poverty, parent education and early education enrollment. With policy out of the equation, Ed Week’s grades in each domain track fairly consistently with the overall grade, suggesting that the indicators identified by the authors tell us at least something about the quality of education.

So which state gets bragging rights? If you want to use one of these report cards as fodder for your own particular brand of advocacy, then by all means go with ALEC or NPE – whichever one fits your views best. But if you really want to know how well different education policies work, you’d be better off consulting the research. You can start here, here and here.

As for ranking states by their education systems? Stick with Goldilocks.






February 3, 2016

PARCC test results lower for computer-based tests

In school year 2014-2015, students took the Partnership for Assessment of Readiness for College and Careers (PARCC) exam on a pilot basis. The PARCC exam was created to align with the Common Core standards and is among the few standardized assessments that measure how well school districts are teaching higher-level competencies.

On February 3, Education Week reported that results for students who took the computer-based version of the exam were significantly lower than results for students who took the traditional pencil-and-paper version. While the article states that the PARCC organization does not have a clear answer for why this occurred, I will offer my own explanation based on my experience as a teacher of students who took this exam last year.

I taught high school history, and the largest discrepancy in results between students who took the computer exam and those who took the paper exam was at the high school level. Here is my theory for the discrepancy. Throughout students’ academic careers, we teachers teach them to “mark up” the text. This means that as they read books, articles, poems, primary sources and so on, students should have a pen or pencil and a highlighter in hand. There are many acronyms for how students should mark up their text. One is HACC: Highlight, Annotate, Circle unknown words, Comment. There are many others, but the idea is the same. Students are taught to summarize each paragraph in the margins and make note of key words. This helps students stay engaged with the reading, find main ideas, and think critically about what they are reading. It also makes it easier to go back and skim the text for the main ideas and remember what they read without re-reading.

Students are generally required to mark up and annotate the text this way, but, honestly, I still do this! And I would bet that many adults do too. When people need to read a long article at work, many print it out and read it with a pen in hand. It makes it easier to focus on what you are reading. Now imagine that someone is going to test you on that article. You will be even more anxious to read it carefully and write notes for yourself in the margins.

The point is that students are taught to do this when reading, especially when reading passages for exams that include questions based on the passage. My own students had this drilled into them throughout the high school years I taught them. Sometime last year the teachers learned that our school would be giving the pilot version of the PARCC exam to our students. During a teacher professional development day we were asked to go online to the PARCC website, learn about the test and take a practice exam. I encourage you to go online and take it for yourself: this exam is hard! We were asked to analyze the questions and think about ways we could change our own in-class exams to better align with PARCC. We were told that it would soon replace our state’s standardized exam.

One of the first things we all noticed was how long the reading passages are in the ELA portion of the test. It took a long time to read through them, and we all struggled to read them on a computer screen. I really wanted a printed version to write my notes on! The passages were long and detailed, and I felt as though by the time I saw the questions I would have to re-read the whole passage to find the answer (or find the section where I could infer an answer). I knew the students would struggle with this and anticipated lower scores on this exam than on the state test. I was thankful that their scores wouldn’t actually count this year. But what happens when this becomes a high-stakes test?

As I anticipated, the scores for students who took the computer-based exams were far lower than those for students who took a traditional paper test. The Illinois State Board of Education found that, across all grades, 50% of students scored proficient on the paper-based PARCC exam compared to only 32% of students who took the exam online. In Baltimore County, students who took the paper test scored almost 14 points higher than students of similar demographics who took the test on the computer.

The low scores themselves are a different story: organizations will need to analyze the results of this major pilot and determine the test’s validity, and students and teachers, if the exam becomes mandatory, will have to adjust to the standards and testing format associated with it. The bigger story is that significant hardships come with taking a computer-based test.

My main concern is the reading passages. I don’t believe teachers should abandon the “mark it up” technique to bend to computer-based testing, because learning how to annotate a text is valuable throughout people’s lives. I saw the students struggle to stare at the computer screen and focus on the words. Many used a finger on the screen to follow along with what they were reading. It was clearly frustrating for them not to be able to underline and make notes as they were used to doing.

Other concerns stem from the fact that the test is online. It requires access to the internet, enough computers for students to test on, and students and teachers who are technologically savvy. When my school gave the test, it took several days and a lot of scheduling and disruption to get all students through it, given our limited number of computers. Certain rooms in the building have less reliable internet connections than others, and some students lost their connection while testing. Sometimes the system didn’t accept a student’s login or wouldn’t advance to the next page. There were no PARCC IT professionals in the building to fix these issues. Instead, teachers who didn’t know the system any better than the students tried to help.

Not all students were ultimately able to take or finish the exam because of these issues. Thankfully their results didn’t count toward graduation! There are also equity concerns between students who are familiar with computers and typing and those who have had little exposure to technology. As a teacher in an urban school, I can tell you it was not uncommon to see students typing essays on their phones because they didn’t have a computer.

As a whole, I’m not surprised by the discrepancy in test scores, and I imagine other teachers are not either. The Education Week article quotes PARCC’s chief of assessment as saying, “There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system.” This vague statement only scratches the surface. I encourage those analyzing the cause of the discrepancy to talk to teachers and students. Also, ask yourselves how well you would do taking an exam entirely online, particularly one with long reading passages. –Breanna Higgins

Filed under: Accountability, Assessments, Common Core, High school, Testing — Breanna Higgins @ 4:27 pm




