

The EDifier

February 23, 2016

Common Core’s happy days may be here again

Did a relationship ever sour as quickly as the one between the Common Core and public opinion? Back in 2010, when the college- and career-ready standards were shiny and new, leaders from business and higher education as well as a certain U.S. Secretary of Education praised their rigor, coherence and attention to critical thinking. Within a year, 45 governors and D.C. had rushed to adopt them as their own – a move a majority of teachers and parents viewed favorably.

Then, implementation happened. Many teachers felt rushed to produce results. Parents couldn’t understand their child’s homework. Their anxiety fed chatter on talk radio and social media that did the incredible: it united anti-corporate progressives and anti-government tea partiers in opposition to the new standards and the assessments that go with them. States once on board with the program began to bail in the face of angry constituents.

Recently, though, the mood appears to be shifting back into neutral. Presidential candidates deliver variations of the “repeal Common Core” line to applause, but the issue doesn’t seem to be gaining much traction in the race. The newly reauthorized ESEA deflates anti-Common Core messaging by explicitly forbidding the federal government from compelling or encouraging state adoption of any set of standards, including the Common Core.  After a flurry of state legislative proposals were introduced to undo the standards, only a handful were ever signed into law, and in some of those states, the replacements aren’t substantively different from the ones they tossed.

New studies related to the Common Core could prompt a wary public to give the standards a second look. In the first, a Harvard research team led by Thomas Kane surveyed a representative sample of teachers and principals in five Common Core states about implementation strategies. They were then able to match responses to student performance on the Core-aligned assessments, PARCC and Smarter Balanced.

According to their report, Teaching Higher: Educators’ Perspectives on Common Core Implementation, three out of four teachers have “embraced the new standards” either “quite a bit” or “fully.” When asked how much of their classroom instruction had changed, a similar proportion said at least half of it had. Four in five math teachers say they have increased “emphasis on conceptual understanding” and “application of skills,” while an even higher proportion of English teachers reported assigning more writing “with use of evidence.” All are attributes emphasized in the standards.

The research team then related the survey results to students’ scores on the new assessments after controlling for demographics and prior achievement. While they did not find strategies with particular impact on English language arts, they did identify math practices associated with higher student scores: more professional development days; more classroom observations “with explicit feedback tied to the Common Core”; and the “inclusion of Common Core-aligned student outcomes in teacher evaluations.”

Casting light on such strategies is only worthwhile, however, if there is also evidence that the Common Core are good standards. Enter the Fordham Institute. The education think tank assembled a team of 40 experts in assessment and teaching to evaluate the quality of PARCC and Smarter Balanced. For comparison, they examined the college-readiness-aligned ACT Aspire and MCAS, the highly regarded Massachusetts state assessment. The grades 5 and 8 test forms were analyzed against criteria developed by the Council of Chief State School Officers for evaluating “high-quality assessments” that aim to assess college- and career-readiness.

The short version: all four tests scored highly for “depth,” that is, items that are “cognitively demanding.” PARCC and Smarter Balanced, however, edged out both ACT Aspire and MCAS in “content.” The researchers conducted an additional analysis against other assessments and found the Common Core-aligned tests also “call for greater emphasis on higher-order thinking skills than either NAEP or [the international] PISA, both of which are considered to be high-quality challenging assessments.”

Whether or not participating in national standards is a good idea is a decision that should rightfully be made by individual states. There are many legitimate political arguments for going either way, and each state will likely view it differently. But whether the Common Core standards – in full or in part – represent the expectations a state should have for all its students is an educational question that is worth considering on its own merits.

These early reports suggest that the new standards are higher and deeper than what states had before. Most teachers, although not all, have “embraced” them and are changing their instruction accordingly. We are learning anecdotally, too, that as parents see evidence of their child’s growth, they come around as supporters (see here and here).  What this means for the future is anyone’s guess. But for now it’s looking like the Common Core or something very much like them may be seeing happier days ahead. — Patte Barth

This entry first appeared on Huffington Post February 22, 2016.







February 3, 2016

PARCC test results lower for computer-based tests

In school year 2014-2015, students took the Partnership for Assessment of Readiness for College and Careers (PARCC) exam on a pilot basis. The PARCC exam was created to align with the Common Core standards and is among the few standardized assessments of how well school districts are teaching higher-level competencies.

On February 3, Education Week reported that the results for students who took the computer-based version of the exam were significantly lower than those for students who took the traditional pencil-and-paper version. While the article states that the PARCC organization does not have a clear answer for why this occurred, I will offer my own explanation based on my experience as a teacher of students who took the exam last year.

I taught high school history, and the largest discrepancy in results between students who took the computer versus the paper exam was at the high school level. Here is my theory for the discrepancy. Throughout students’ academic careers, we teachers teach them to “mark up” the text. This means that as they read books, articles, poems, primary sources, and so on, students should have a pen or pencil and a highlighter in hand. There are many acronyms for how students should mark up their text. One is HACC: Highlight, Annotate, Circle unknown words, Comment. There are many others, but the idea is the same. Students are taught to summarize each paragraph in the margins and make note of key words. This helps students stay engaged with the reading, find main ideas, and think critically about what they are reading. It also makes it easier to go back and skim the text for the main ideas and remember what they read without re-reading.

Generally students are required to mark up and annotate the text this way, but, honestly, I still do this! And I would bet that many adults do too. Faced with a long article at work, many people print it out and read it with a pen in hand. It makes it easier to focus on what you are reading. Now imagine that someone is going to test you on that article. You will be even more anxious to read it carefully and write notes for yourself in the margins.

The point is that students are taught to do this when reading, especially when reading passages for exams with questions based on the passage. My own students had this drilled into them throughout their high school years. Sometime last year, the teachers learned that our school would be giving the pilot version of the PARCC exam to our students. During a teacher professional development day, we were asked to go to the PARCC website, learn about the test, and take a practice exam. I encourage you to go online and take it for yourself — this exam is hard! We were asked to analyze the questions and think about ways we could change our own in-class exams to better align with PARCC. We were told that it would soon replace our state’s standardized exam.

One of the first things we all noticed was how long the reading passages are for the ELA portion of the test. It took a long time to read through them, and we all struggled to read them on a computer screen. I really wanted a printed version to write my notes on! The passages were long and detailed, and I felt as though by the time I reached the questions I would have to re-read the whole passage to find the answer (or find the section from which I could infer an answer). I knew the students would struggle with this and anticipated lower scores on this exam than on the state test. I was thankful that their scores wouldn’t actually count this year. But what happens when this becomes a high-stakes test?

As I anticipated, the scores for students who took the computer-based exam were far lower than those for students who took a traditional paper test. The Illinois State Board of Education found that, across all grades, 50% of students scored proficient on the paper-based PARCC exam compared to only 32% of students who took the exam online. In Baltimore County, students who took the paper test scored almost 14 points higher than students of similar demographics who took the test on a computer.

The low scores themselves are a different story. Organizations will need to analyze the results of this major pilot and determine their validity. Students and teachers, if the test becomes mandatory, will have to adjust to better learn the standards and testing format associated with it. The bigger story is that significant hardships come with taking a computer-based test.

My main concern is the reading passages. I don’t believe teachers should abandon the “mark it up” technique to bend to computer-based testing, because learning how to annotate a text is valuable throughout people’s lives. I saw the students struggle to stay focused on the computer screen and on the words. Many used a finger on the screen to follow along with what they were reading. It was clearly frustrating for them not to be able to underline and make notes as they were used to doing.

Other concerns stem from the test being online. It requires internet access, enough computers for all students to test on, and students and teachers who are technologically savvy. When my school gave the test, it took several days and a great deal of scheduling and disruption to get all students through it, given our limited number of computers. Certain rooms in the building have less reliable internet connections than others, and some students lost their connection while testing. Sometimes the system didn’t accept a student’s login or wouldn’t advance to the next page. There were no PARCC IT professionals in the building to fix these issues. Instead, teachers who didn’t know the system any better than the students tried to help.

Not all students were ultimately able to take or finish the exam because of these issues. Thankfully, their results didn’t count toward graduation! There are also equity concerns between students who are familiar with computers and typing and those who have had little exposure to technology. As a teacher in an urban school, I can tell you it was not uncommon to see students typing essays on their phones because they didn’t have a computer.

As a whole, I’m not surprised by the discrepancy in test scores, and I imagine that other teachers are not either. The Education Week article quotes PARCC’s chief of assessment as saying, “There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system.” This vague statement only scratches the surface. I encourage those analyzing the cause of the discrepancy to talk to teachers and students. Also, ask yourselves how well you would do taking an exam entirely online, particularly one with long reading passages. –Breanna Higgins

Filed under: Accountability, Assessments, Common Core, High school, Testing — Breanna Higgins @ 4:27 pm





October 15, 2015

Schoolwork worth doing

“Ok, students, it’s time to get out your crayons!”

Hearing this never fails to delight kindergarteners in the classroom. But what about in seventh grade social studies, even if colored pencils are substituted for crayons?  Outside of art class, does drawing really represent the kind of work middle-schoolers should be doing to get ready for high school?

Analysts for the Education Trust recently examined the quality of classroom assignments in a half dozen middle schools to document the degree to which they were aligned to the Common Core’s English language arts standards. The preliminary results were published last month in the report Checking In: Do Classroom Assignments Reflect Today’s Higher Standards?

The Ed Trust team was able to identify assignments that were clearly up to the task. But they also found that these were a fraction of what students are asked to do on a daily basis. According to the analysis, surprisingly few assignments were “aligned with a grade-appropriate standard” – 38 percent, to be exact. The 7th grade drawing assignment cited above is an example. And the picture is even worse for students in high-poverty schools (31 percent “grade-appropriate”).

The research team examined both in-school and out-of-school assignments given by 92 teachers to students in grades six through eight over a two-week period. Common Core ELA standards cross subject areas, so assignments were collected from teachers of English, humanities, history/social studies and science. The average number submitted per teacher was 17. Altogether the analysts scored nearly 1,600 assignments on such attributes as “alignment to Common Core,” “centrality of text,” “cognitive challenge” and “motivation and engagement.”

The report authors, Sonja Brookins Santelises and Joan Dabrowski, acknowledge that they did not expect to see 100 percent alignment to the higher-level demands expressed in the standards. Indeed, there is a place in the classroom for the occasional quick check of facts or basic skills practice that will help students use these tools more confidently when applied to more challenging tasks. But Santelises and Dabrowski did hope to see more rigor than they found. Among their findings:

  • 16 percent of assignments required students to “use a text for citing evidence”;
  • 4 percent required higher-level thinking; in contrast, 85 percent asked for either the recall of information or the application of basic skills;
  • 2 percent met their criteria for “relevance and choice”; and
  • not surprisingly given all this, only 5 percent were scored in the high range of the Ed Trust framework.

For me, reading this report was like déjà vu all over again. In the nineties and early aughts, I worked at the Ed Trust as part of a team that helped teachers in high-poverty schools align their lessons and assignments to state standards. During that time, I can’t say how often we saw the “movie poster assignment” as the culminating task following a major unit of study. This assignment asks students to draw a movie poster on the topic rather than write a paper or otherwise show their capacity to extend their thinking about the material. Could such an assignment be given occasionally as a break from a routine of academic heavy lifting? Absolutely. But in the schools we worked in, the movie poster wasn’t the exception. Too often, assignments like it were the routine.

Today, as then, low-level assignments are not a teacher-led plot to keep kids illiterate. Teachers in many schools struggle to keep their students engaged while keeping up with overstuffed curricular and testing requirements. The problems are exacerbated when students are performing well below their peers. Teachers in such situations often respond by providing lessons in easy bits with the idea that they will eventually build to higher understanding – what educators call “scaffolding.” (I show an example of a scaffolded math lesson on slides 7-13 in a Common Core presentation you can find here.) While the practice is sound, Santelises and Dabrowski documented an over-reliance on scaffolding that rarely led to independent learning.

Nonetheless, the fact that 5 percent of the lessons were complex and high-level is cause for optimism. These teachers clearly know what rigor looks like. In addition, because of the short two-week window, the analysts may well have missed major end-of-unit assignments that push students’ thinking to higher levels.

The Ed Trust team is continuing its study, which should tell us more about how typical these findings are. In the meantime, school leaders who want to know how well instruction in their schools and districts aligns to higher standards can check out this implementation guide.






July 10, 2015

‘Proficient’ in the eye of the beholder

While we often talk about the American educational system, in truth we have 50 systems, each with the latitude to define its own academic standards. A newly published analysis by the National Center for Education Statistics shows just how widely those expectations for student learning differ among states. Moreover, the findings suggest that most states could be aiming too low.

For the last ten years, NCES has conducted periodic statistical analyses that map student proficiency on state tests to performance on NAEP. This national assessment is administered in all states and is, by broad consensus, considered the gold standard both for the richness of its content and the quality of the assessment itself. As such, states whose students perform at about the same level on the state test as they do on NAEP can be considered to have high performance standards.

Some partial findings:

  • Grade 4: Only two states (New York and Wisconsin) had state proficiency standards equivalent to NAEP-proficient in both reading and math; an additional three states (Massachusetts, North Carolina and Texas) were aligned with NAEP-basic in reading and NAEP-proficient in math. Four states (Alabama, Georgia, Idaho and Maryland) had proficiency levels aligned with NAEP-below basic. A whopping 22 states were in the NAEP-below basic range in reading.
  • Grade 8: Only New York’s proficiency levels aligned with NAEP-proficient in both reading and math, while North Carolina and Texas were within NAEP-basic in reading and NAEP-proficient in math. Five states (Alabama, Connecticut, Georgia, Idaho and Ohio) were in the below basic range in both subjects. Unlike in grade 4, only three states’ grade 8 performance (DC, Indiana and Mississippi) was at the NAEP-below basic level in reading. The majority of states were within the NAEP-basic range in reading and math.

Alert readers will note, of course, that some high-performing states like Connecticut and Maryland had proficiency levels that aligned with NAEP’s lowest performance designation. The analysis is, to be sure, an imperfect comparison. Even so, the relationship between state alignment to NAEP-proficient and their relative performance is fairly consistent, as you can see in the chart featured below as well as in the full report.

Despite the study’s limitations, NCES provides important context for states to help them gauge the quality of their standards. According to the Atlantic, Peggy Carr, NCES’s acting commissioner, explained to reporters that NAEP-proficient is considered a level showing students are on track to be “college-ready.” The most recent administration showed that only 35 percent of the nation’s fourth-graders performed at proficient or above on NAEP-reading; about the same proportion of eighth-graders (36 percent) were proficient in math. Clearly, we have our work cut out for us to meet the goal of all graduates prepared for college and careers.

The NCES study was based on 2013 data so it’s too early to see the impact of the common core standards and aligned assessments in those states that have adopted them. Several states that opted out, however, are also committed to the college and career-ready agenda. NCES’s next iteration of this series should, therefore, give us more insight into how well we are advancing.

[Chart: state proficiency standards mapped to NAEP achievement levels]


Filed under: Assessments, Common Core, standards — Patte Barth @ 3:42 pm





July 2, 2015

Testing, opt outs and equity

Spring heralds the return of many things – tulips, bare pavement, baseball, and for millions of public schoolkids, state tests. This year, however, the inevitable proved to be largely evitable. April tulips weren’t seen until late May. Much of the country experienced a white Easter. Major league games were snowed out. And tens of thousands of students just said “no” to being tested.

To be sure, the vast majority of students took their exams as expected. New York state has by far the largest number of test refusers. Even so, an analysis by the New York Times estimates that about 165,000 New York students, or roughly one out of every six, opted out of one or more tests in 2015. Like New York, Colorado has experienced higher than usual opt outs, but 83 percent of seniors still took their exams this year.

Despite the small numbers nationwide, the opt out movement is drawing attention to the test weariness that has settled on many public school parents, teachers and students, even among those who don’t opt out. New Common Core tests seem to be adding to their anxiety. By making their frustrations visible, the test refuseniks are starting to influence testing policy and its place in school accountability, most notably in Congress and the proposed ESEA bills currently under consideration.

So who are these opt outers? The New York Times analysis found that the movement appears to be a mostly middle-class phenomenon. According to their calculations, poor districts in New York (Free & Reduced Price Lunch > 60%) had the fewest test refusers, followed by the wealthiest (FRPL < 5%). An April 2015 poll by Siena College provides some other clues by identifying racial differences in voter attitudes. While a 55 percent majority of white voters in the Empire State approved of opting out, only 44 percent of black and Latino voters did.

A 2015 survey from the Public Policy Institute of California identified similar racial differences in opinions about the Common Core. Substantial majorities of Latino, Asian and black Californians expressed confidence that the new standards will “make students more college and career ready,” compared to less than half of white voters.

One probable reason for these racial and class differences is the role standards and assessments have played in educational equity over the last two decades. The 1994 re-authorization of ESEA laid the foundation for what would eventually become NCLB’s test-based accountability by calling on states to “establish a framework for comprehensive, standards-based education reform for all students.”  At that time, researchers and analysts were beginning to show that the achievement gap was not just a reflection of inequitable resources but also of unequal expectations. A 1994 study from the U.S. Department of Education’s Office of Research, for example, found that “students in high poverty schools … who received mostly A’s in English got about the same reading score [on NAEP] as did the ‘C’ and ‘D’ students in the most affluent schools.” In math, “the ‘A’ students in the high poverty schools most closely resembled the ‘D’ students in the most affluent schools.”  In 2001, NCLB would define further measures to correct these inequities by requiring state tests that would give the public a common, external measurement for gauging whether academic standards were being implemented equally between high- and low-poverty schools.

Indeed, the civil rights community has been among the most vocal supporters of standardized tests in accountability systems. Earlier this year, a coalition of 25 civil rights organizations led by the Leadership Conference on Civil and Human Rights released a statement of principles for ESEA reauthorization. Signatories included the NAACP, the National Council of La Raza, the National Congress of American Indians, and the National Disabilities Rights Network. Among other things, the principles call for retaining the annual testing requirements of NCLB. In May, twelve of these organizations issued another statement specifically criticizing the opt out movement, declaring:

[T]he anti-testing efforts that appear to be growing in states across the nation, like in Colorado and New York, would sabotage important data and rob us of the right to know how our students are faring. When parents ‘opt out’ of tests—even when out of protest for legitimate concerns—they’re not only making a choice for their own child, they’re inadvertently making a choice to undermine efforts to improve schools for every child.

The statement was not universally embraced. Notable civil rights leader Pedro Noguera along with the Advancement Project’s Browne Dianis and John Jackson of the Schott Foundation took exception to what they consider to be a “high-stakes, over-tested climate” for disadvantaged students. Yet their objections are not so much against tests themselves, but in how the information is used.

There is a growing consensus that the balance between assessment for improvement and assessment for accountability has become skewed toward high stakes – something many believe has a perverse effect on classroom practice. But like Mr. Noguera and his colleagues, many educators and experts also believe that standardized tests themselves are not the problem; it’s the outsized role they have assumed in everything from instruction to teacher evaluation. The next few months promise to launch many federal and state conversations about the proper role of state tests. Ideally, that role will serve ongoing improvement while assuring the public that all students are receiving the benefits of a solid public education.

Filed under: Achievement Gaps, Assessments, Common Core, equity, Testing — Patte Barth @ 1:10 pm




