

The EDifier

August 19, 2016

Too many tests in your district? You may need Assessment 101

Across the country, educators, parents, and students are expressing concerns about the number of tests being administered in schools. To address this, we partnered with the education nonprofit Achieve to support school board members and their districts in moving toward more coherent and streamlined assessment systems.

Recognizing the critical role school boards play in influencing systemic changes at the local level, CPE co-authored a training module for school boards based on Achieve’s student assessment inventory tool.

While the suite of resources is currently available for free here and here, CPE will continue its involvement by facilitating in-person and online training in the Assessment 101 process for participating school districts in three partner states, with the hope that these trainings will lead to measurable reductions in testing there and in other districts that learn from and adopt this approach.






April 28, 2016

12th graders’ math scores drop, reading flatlines

And just when we had allowed ourselves to get giddy over record-shattering high school graduation rates.

NAEP, also known as the Nation’s Report Card, released the results of its 2015 assessment of high school seniors’ math and reading proficiency. Like their 4th and 8th grade schoolmates, whose 2015 scores were published last fall, the nation’s 12th graders either made no progress or dropped a few points, especially in mathematics. Worse, scores for the lowest performers fell the most in both subjects.

Let’s start with reading. The overall score was 1 point lower on the NAEP scale than two years ago, which is not a statistically significant change. However, 12th graders are scoring 5 points lower than their peers did in 1992, the first year the main NAEP reading assessment was administered.

There was no noticeable change since 2013 in the scores of any racial/ethnic group, or in the achievement gaps between them.

Indeed, the biggest change was at the bottom. In just the last two years, the proportion of students who did not even read at the basic level grew from 25 to 28 percent. What this means in more tangible terms is that these soon-to-be graduates cannot recognize the main purpose of expository text, cannot recognize the main purpose of an argument, and cannot explain a character’s action from a story description.

The math picture isn’t any rosier. The overall math score fell a significant 3 points on the NAEP scale. While this is still 2 points higher than in 2005 – the first administration of the test’s new math framework – it does represent a reversal after years of steady gains. As with reading, the math scores were relatively flat for every racial/ethnic group compared to 2013. One happy exception: scores for English language learners rose by 4 points.

Math also saw an increase of the wrong kind. A whopping 38 percent of high school seniors did not perform at the basic level in 2015, an increase of 3 points over 2013. This is troubling on its own. It is truly baffling considering that 90 percent of seniors reported having taken Algebra II or a higher math course in high school. We should see this group of low performers shrinking, not growing.

Of high interest to education policymakers and parents is the degree to which 12th graders are prepared for college work. Beginning in 2008, the National Assessment Governing Board, which oversees NAEP, commissioned several studies linking NAEP performance levels to college readiness. Based on those analyses, just over a third of seniors in 2015 scored at a level showing they had the knowledge and skills needed to succeed in freshman courses. But ready or not, two-thirds of them will head to two- and four-year colleges by the October following graduation.

Why is this happening? Many advocates have been quick to point to policies like the Common Core, too much testing, not enough testing, or whatever other bee is in their bonnet. But as I have written elsewhere, there is not enough information at this point to lay the blame on any one of these, although they surely warrant watching. Likewise, some observers have pointed to the increase in childhood poverty, which also deserves attention.

I think another explanation might be found in one of our great successes. High school graduation rates have surged in just the last 10 years. In 2013, 81 percent of all high school students graduated within four years. We know from research that failing grades are a strong risk factor for dropping out. Until recently, these low performers would have dropped out before showing up in the NAEP data as seniors. The fact that they are still in school is a good thing, but it may also be dragging 12th grade scores down.

The truth is, it’s too soon for us to know for sure why this happened. But there are questions schools should be examining to get us back on the right track:

  • Do the high-level courses students are taking in larger numbers actually represent high-level content?
  • Do schools have enough counselors and other trained professionals to not just make sure students stay in school, but have the support they need to perform academically?
  • Are teachers also supported as they implement higher standards in their classrooms?
  • Finally, are federal, state, and local policymakers providing the resources high schools need to ensure every student graduates ready to succeed in college, careers, and life?
Filed under: Assessments, CPE, High school, NAEP, Reading, Testing — Patte Barth @ 10:52 am





February 23, 2016

Common Core’s happy days may be here again

Did a relationship ever sour so quickly as the Common Core and public opinion? Back in 2010, when the college- and career-ready standards were shiny and new, leaders from business and higher education as well as a certain U.S. Secretary of Education praised their rigor, coherence, and attention to critical thinking. Within a year, 45 states and D.C. had rushed to adopt them as their own – a move a majority of teachers and parents viewed favorably.

Then, implementation happened. Many teachers felt rushed to produce results. Parents couldn’t understand their child’s homework. Their anxiety fed chatter on talk radio and social media that did the incredible: it united anti-corporate progressives and anti-government tea partiers in opposition to the new standards and the assessments that go with them. States once on board with the program began to bail in the face of angry constituents.

Recently, though, the mood appears to be shifting back into neutral. Presidential candidates deliver variations of the “repeal Common Core” line to applause, but the issue doesn’t seem to be gaining much traction in the race. The newly reauthorized ESEA deflates anti-Common Core messaging by explicitly forbidding the federal government from compelling or encouraging state adoption of any set of standards, including the Common Core. After a flurry of state legislative proposals to undo the standards, only a handful were ever signed into law, and in some of those states the replacements aren’t substantively different from the standards they tossed.

New studies related to the Common Core could prompt a wary public to give the standards a second look. In the first, a Harvard research team led by Thomas Kane surveyed a representative sample of teachers and principals in five Common Core states about implementation strategies. They were then able to match responses to student performance on the Core-aligned assessments, PARCC and Smarter Balanced.

According to their report, Teaching Higher: Educators’ Perspectives on Common Core Implementation, three out of four teachers have “embraced the new standards” either “quite a bit” or “fully.” When asked how much of their classroom instruction had changed, a similar proportion said it had by one half or more. Four in five math teachers say they have increased “emphasis on conceptual understanding” and “application of skills,” while an even higher proportion of English teachers reported assigning more writing “with use of evidence.” All are attributes emphasized in the standards.

The research team then related the survey results to students’ scores on the new assessments after controlling for demographics and prior achievement. While they did not find strategies with particular impact on English language arts, they did identify math practices that were associated with higher student scores: more professional development days; more classroom observations “with explicit feedback tied to the Common Core”; and the “inclusion of Common Core-aligned student outcomes in teacher evaluations.”
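For readers curious what “controlling for demographics and prior achievement” means in practice, here is a minimal sketch in Python. This is not the Harvard team’s actual code or data; the file name and columns (pd_days for professional development days, prior_score, demographic_group) are hypothetical stand-ins.

    # A minimal sketch, not the study's actual analysis. Regress current test
    # scores on an implementation measure plus controls, so the measure's
    # coefficient reflects its association with scores net of prior
    # achievement and demographics.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical dataset: one row per student.
    df = pd.read_csv("survey_linked_scores.csv")

    # C(...) treats demographic_group as a categorical control variable.
    model = smf.ols("score ~ pd_days + prior_score + C(demographic_group)",
                    data=df).fit()

    # Estimated association of PD days with scores, holding controls fixed.
    print(model.params["pd_days"], model.pvalues["pd_days"])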

Casting light on such strategies is only worthwhile, however, if there is also evidence that the Common Core are good standards. Enter the Fordham Institute. The education think tank assembled a team of 40 experts in assessment and teaching to evaluate the quality of PARCC and Smarter Balanced. For comparison, they examined the college-readiness-aligned ACT Aspire and MCAS, the highly regarded Massachusetts state assessment. The grade 5 and 8 test forms were analyzed against criteria developed by the Council of Chief State School Officers for evaluating “high-quality assessments” that aim to measure college and career readiness.

The short version: all four tests scored highly for “depth,” that is, items that are “cognitively demanding.” PARCC and Smarter Balanced, however, edged out both ACT Aspire and MCAS on “content.” The researchers conducted an additional analysis against other assessments and found that the Common Core-aligned tests also “call for greater emphasis on higher-order thinking skills than either NAEP or [the international] PISA, both of which are considered to be high-quality challenging assessments.”

Whether or not participating in national standards is a good idea is a decision that should rightfully be made by individual states. There are many legitimate political arguments for going either way, and each state will likely view it differently. But whether the Common Core standards – in full or in part – represent the expectations a state should have for all its students is an educational question that is worth considering on its own merits.

These early reports suggest that the new standards are higher and deeper than what states had before. Most teachers, although not all, have “embraced” them and are changing their instruction accordingly. We are learning anecdotally, too, that as parents see evidence of their child’s growth, they come around as supporters (see here and here).  What this means for the future is anyone’s guess. But for now it’s looking like the Common Core or something very much like them may be seeing happier days ahead. — Patte Barth

This entry first appeared on the Huffington Post on February 22, 2016.

 






February 3, 2016

PARCC test results lower for computer-based tests

In school year 2014-2015, students took the Partnership for Assessment of Readiness for College and Careers (PARCC) exam on a pilot basis. The PARCC exam was created to align with the Common Core State Standards and is among the few standardized assessments of how well school districts are teaching higher-level competencies.

On February 3, Education Week reported that the results for students who took the computer-based version of the exam were significantly lower than the results for students who took the traditional pencil-and-paper version. While the article states that the PARCC organization does not have a clear answer for why this occurred, I will offer my own explanation based on my experience as a teacher of students who took this exam last year.

I taught high school history, and the largest discrepancy between the computer and paper results was at the high school level. Here is my theory for the discrepancy. Throughout students’ academic careers, we teachers teach them to “mark up” the text. This means that as they read books, articles, poems, primary sources, and so on, students should have a pen or pencil and highlighter in hand. There are many acronyms for how students should mark up their text. One is HACC: Highlight, Annotate, Circle unknown words, Comment. There are many others, but the idea is the same. Students are taught to summarize each paragraph in the margins and make note of key words. This helps students stay engaged with the reading, find main ideas, and think critically about what they are reading. It also makes it easier to go back and skim the text for the main ideas and to remember what they read without re-reading.

Students are generally required to mark up and annotate the text this way, but, honestly, I still do this! And I would bet that many adults do too. When they need to read a long article at work, many people print it out and read it with a pen in hand. It makes it easier to focus on what you are reading. Now imagine that someone is going to test you on that article. You will be even more careful to read it closely and write notes for yourself in the margins.

The point is that students are taught to do this when reading, especially when reading passages for exams with questions based on the passage. My own students had this drilled into them throughout the years I knew and taught them. Sometime last year, we teachers learned that our school would be giving the pilot version of the PARCC exam to our students. During a professional development day, we were asked to go to the PARCC website, learn about the test, and take a practice exam. I encourage you to go online and take it for yourself — this exam is hard! We were asked to analyze the questions and think about ways we could change our own in-class exams to better align with PARCC. We were told that it would soon replace our state’s standardized exam.

One of the first things we all noticed was how long the reading passages are on the ELA portion of the test. It took a long time to get through them, and we all struggled to read them on a computer screen. I really wanted to have a printed version to write my notes on! The passages were long and detailed, and I felt as though by the time I saw the questions I would have to re-read the whole passage to find the answer (or find the section where I could infer an answer). I knew the students would struggle with this and anticipated lower scores on this exam than on the state test. I was thankful that their scores wouldn’t actually count this year. But what happens when this becomes a high-stakes test?

As I anticipated, the scores for students who took the computer-based exam were far lower than those for students who took the traditional paper test. The Illinois State Board of Education found that, across all grades, 50 percent of students scored proficient on the paper-based PARCC exam, compared to only 32 percent of students who took the exam online. In Baltimore County, students who took the paper test scored almost 14 points higher than students of similar demographics who took the test on the computer.

The low scores themselves are a different story. Testing organizations will need to analyze the results of this major pilot and determine the test’s validity, and students and teachers, if it becomes mandatory, will have to adjust to the standards and testing format associated with it. The bigger story is that significant hardships come with taking a computer-based test.

My main concern is the reading passages. I don’t believe teachers should abandon the mark-it-up technique to bend to computer-based testing, because learning how to annotate a text is valuable throughout people’s lives. I saw the students struggle to stare at the computer screen and focus on the words. Many used their fingers on the screen to follow along with what they were reading. It was clearly frustrating for them not to be able to underline and make notes as they were used to doing.

My other concerns stem from the test being online. It requires internet access, enough computers for students to test on, and students and teachers who are technologically savvy. When my school gave the test, it took several days and a lot of scheduling and disruption to get all students through it, given our limited number of computers. Certain rooms of the building have less reliable internet connections than others, and some students lost their connection while testing. Sometimes the system didn’t accept a student’s login or wouldn’t advance to the next page. There were no PARCC IT professionals in the building to fix these issues. Instead, teachers who didn’t know the system any better than the students tried to help.

Not all students were ultimately able to take or finish the exam because of these issues. Thankfully, their results didn’t count toward graduation! There are also equity concerns between students who are familiar with computers and typing and those who have had little exposure to technology. As a teacher in an urban school, I can tell you it was not uncommon to see students typing essays on their phones because they didn’t have a computer.

As a whole, I’m not surprised by the discrepancy in test scores, and I imagine other teachers are not either. The Education Week article quotes PARCC’s chief of assessment as saying, “There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system.” This vague statement only scratches the surface. I encourage those analyzing the cause of the discrepancy to talk to teachers and students. Also, ask yourselves how well you would do taking an exam completely online, particularly one with long reading passages. –Breanna Higgins

Filed under: Accountability, Assessments, Common Core, High school, Testing — Breanna Higgins @ 4:27 pm





October 27, 2015

Fewer, better tests

Parents have been concerned about the amount of testing their children have been subjected to in recent years, to the point where some are choosing to opt their children out of certain standardized tests. Yet a number of educators, policymakers, and education organizations have expressed the need for such tests to identify students whose needs are not being fully met—particularly poor, minority, and other traditionally disadvantaged students. Unfortunately, it has been unclear how much testing is actually taking place in our nation’s schools.

But yesterday, a report from the Council of the Great City Schools (CGCS) provided the most comprehensive examination of testing to date, shedding important light on the quantity and quality of the tests students are exposed to. Among the report’s findings:

  • The average eighth-grader spends 25.3 hours per year taking mandated assessments, which amounts to 4.22 days, or 2.34 percent of total instructional time (see the quick arithmetic note after this list).
    • Only 8.9 hours of this testing is due to NCLB-mandated assessments.
    • Formative assessments are most likely to be given three times a year and account for 10.8 hours of testing for eighth-graders.
  • There is no correlation between the amount of mandated testing and performance on the National Assessment of Educational Progress (NAEP).
  • Urban school districts have more tests designed for diagnostic purposes than other uses.
  • Opt-out rates in the 66 school districts that participated in the study were typically less than 1 percent.
  • 78 percent of parents surveyed agreed or strongly agreed with the statement “accountability for how well my child is educated is important, and it begins with accurate measurement of what he/she is learning in school.”
    • Yet fewer agreed when the word ‘test’ appeared in the statement.
  • Parents support ‘better’ tests but are not necessarily as supportive of ‘harder’ or ‘more rigorous’ tests.
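
A quick arithmetic note on that first bullet: the day and percentage figures follow from a six-hour instructional day and a 180-day school year, typical values that reproduce the report’s numbers, though the report excerpt doesn’t state them. 25.3 hours divided by 6 hours per day is about 4.22 days, and 25.3 hours divided by the 1,080 instructional hours in such a year (180 × 6) is about 2.34 percent.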

These findings are much needed in a debate about testing that has been dominated by anecdotal accounts and theoretical arguments. CGCS’s report provides facts to inform policymakers on the time spent on testing, as well as the quality and usefulness of the tests. In fact, these findings led President Obama to propose limiting the time students spend on mandatory tests to 2 percent of instructional time.

While limiting the time students spend taking tests is a good thing, the report highlights the fact that over-testing is not necessarily a quantity problem but a quality problem. For example, the report found that many of the tests were aligned neither to each other nor to college- and career-ready standards. That means many students were administered unnecessary and redundant tests that provided little, if any, information to improve instruction. Moreover, results for many tests, including some formative assessments, were not available until months after they were taken, failing to give teachers information in time to adjust their instruction. So the information from many tests is neither timely nor useful.

For testing to drive quality instruction, testing systems must be aligned to college- and career-ready standards and provide usable and timely information. Doing so does not necessarily lead to less testing time, but it does lead to a more efficient testing system. While there is plenty of blame to go around for the lack of a coherent testing system, district leaders play a lead role in ensuring that each and every test is worth taking. Tools such as Achieve’s Student Assessment Inventory for School Districts can show district leaders how much testing is actually taking place in their classrooms and why. With such information in hand, they can make more informed decisions about which tests to keep and which to eliminate, as well as whether better tests are needed to more accurately measure what students are expected to learn. The result will be a more coherent testing system of fewer, better tests that drive quality instruction and, in turn, improve student outcomes. – Jim Hull





