

The EDifier

December 12, 2016

U.S. Students Have Strong Showing on International Math Assessment

We recently released an analysis of PISA scores, which showed disparities in achievement across student groups and mostly stagnant scores.  However, the U.S. had a better showing on another international benchmark, TIMSS (Trends in International Mathematics and Science Study).  TIMSS differs from PISA in that it assesses more classroom-based content, whereas PISA assesses how well students can apply skills learned in the classroom to real-world problems.  TIMSS assesses 4th and 8th graders, while PISA assesses 15-year-olds regardless of grade.

In 2015, TIMSS assessed 49 countries in 4th grade math, 47 countries in 4th grade science, and 39 countries in 8th grade math and science.

Students were assessed in math and science, but today we’ll just take a look at the math scores.

4th Grade Math

U.S. 4th graders scored 14th out of 49 countries, with performance that was statistically lower than only 10 countries and similar to eight countries, including Finland.  They also had the 10th-highest share of students scoring at the advanced level (levels are low, intermediate, high, and advanced).  The percentage of students reaching the high or advanced levels has increased steadily since the test was first administered in 1995.  Students showed greater strength in number items but had deficits in geometric shapes and measures.  They also scored higher on knowledge-based questions than on items requiring the application of knowledge to a problem or reasoning.

8th Grade Math

U.S. 8th graders scored 10th out of 39 countries, with seven countries having statistically higher scores and nine countries having similar scores.  Eighth graders also had the 10th-highest share of students scoring at the advanced level, a share that has increased steadily since 1995.  Students were significantly stronger in algebra than they were in 2007 or 2011.  Like 4th graders, 8th graders were stronger on knowledge-based questions than on application or reasoning questions, despite showing improvement in all three categories since 2007.


Demographic Factors

Schools with fewer students from affluent families and more students from disadvantaged families performed at lower levels than more affluent schools, showing that the U.S. still has much work to do to achieve academic equity.  (Note that these demographic data are reported by principals.)


Schools with more native English speakers performed better in both 4th and 8th grades than schools with greater numbers of students learning English.  Schools where teachers reported a lack of resources or problems with school conditions also fared worse.  Students who felt that they fit in, or belonged, at school had higher achievement.

Other Contributing Factors

The U.S. is in the bottom half of countries on measures of teacher satisfaction.  Higher teacher satisfaction is mildly correlated with higher student performance.  Teachers who reported having greater challenges, such as large classes or administrative tasks, actually had higher student achievement than those who reported few challenges.

International Data

Gender gaps still tend to favor boys across the globe, though in some countries girls outperform boys.  Interestingly, 8th grade girls in 21 countries outperformed boys in Algebra, though boys outperformed girls in number-based problems in 17 countries.

Source: http://timss2015.org/timss-2015/mathematics/achievement-in-content-and-cognitive-domains/

As debate continues about early childhood education in the U.S., the data from other countries are quite convincing that students who have formal education before entering the K-12 system outperform those who do not.  (These data do not include the U.S.)


Source: http://timss2015.org/timss-2015/mathematics/home-environment-support/

We still have work to do, but TIMSS shows us that improvement has been slow and steady for U.S. students.

Filed under: Assessments, CPE, International Comparisons, TIMSS — Chandi Wagner @ 4:07 pm





November 2, 2016

Thoughts on nuance and variance

As we approach the 2016 general election, I’ve heard public officials, family, and friends make very clear statements regarding which side of the aisle they support.  Yet I find it hard to believe that the average American falls in line 100% with either political party, or supports every word and tenet of a particular public policy.  We are nuanced people.  Very few issues are as black-and-white as we’d like them to be.  Here’s a guide to things to consider when forming your stance on a particular issue, candidate, or political party, put in the context of educational issues.

  1. Most issues have an “it depends” clause.

With the onslaught of information available today, it makes sense that we want answers that are black-and-white.  The reality, though, is that there’s gray area for most policies and practices.  We also have to balance our ideological values with evidence.  Charter school proponents may believe in free-market values and choice to improve public schools through vouchers and charter schools, but I haven’t seen widespread evidence that choice in and of itself actually improves academic achievement or long-term outcomes in significant ways.  Yes, there are individual students who have benefited, but there are also individual students who have lost out.  Charter school opponents claim that taking away publicly-elected oversight through school boards is detrimental to the public’s ability to provide free and quality education to all.  Yet, the reality is that some public schools have dismal records, and charter or private schools have sometimes had success with the same students.  We have to acknowledge that we all want good things for our kids, and then use the evidence to figure out what that looks like without demonizing the other side.

  2. Most policies rely heavily on the quality of their implementation to be successful.

Common Core seems to be a prime example of this.  Two-thirds of Americans support some sort of common standards across the country.  Yet barely half of Americans support Common Core itself.  Support on both questions has dwindled significantly from about 90% in 2012.  Even presidential candidate Hillary Clinton has called the roll-out of Common Core “disastrous,” despite supporting the standards overall.


Source: http://educationnext.org/ten-year-trends-in-public-opinion-from-ednext-poll-2016-survey/

They were implemented quickly in many states, often without the curriculum materials or professional development to help teachers succeed in teaching the new standards.  While support for Common Core seems to be leveling off among teachers, who are most familiar with the standards, several states have repealed or are considering repealing them.  The new state standards written in South Carolina and Indiana are extremely similar to the Common Core, which suggests that it may not be the concept or content people disagree with so much as how the standards were implemented and the ensuing political backlash.

 

  3. Statistics usually tell us about an average (the typical student), but variance is also important.

Charter schools are a prime example of this.  On average, they have similar student achievement outcomes as traditional public schools.  But, there are schools that outperform their counterparts and schools that woefully underperform.  We have to think about those schools, too.

This is also clear in school segregation.  The average black student in the U.S. attends a school that is 49% black, 28% white, 17% Latino, 4% Asian, and 3% “Other,” but that doesn’t mean that every black student has this experience.  At the edges of the spectrum, 13% of U.S. public schools are over 90% black and Latino, while 33% of schools are less than 10% black and Latino.  To understand the reality, we need to look at the variety of students’ experiences (known in statistics-speak as “variance”), not just the average.
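The average-versus-variance point can be made concrete with a toy calculation.  The numbers below are invented for illustration (they are not the federal figures cited above): two hypothetical districts whose schools have the same average share of black and Latino students, but very different spreads.

```python
from statistics import mean, pstdev

# Hypothetical demographic shares (% black and Latino) for five schools
# in each of two imaginary districts -- NOT the actual national data.
# District A: every school sits near the average (integrated).
district_a = [35, 38, 40, 42, 45]
# District B: the same average, but schools cluster at the extremes (segregated).
district_b = [5, 8, 40, 72, 75]

for name, shares in [("A", district_a), ("B", district_b)]:
    # mean: the "typical" school; pstdev: how far schools spread around it
    print(f"District {name}: mean = {mean(shares):.0f}%, spread (std dev) = {pstdev(shares):.1f}")
```

Both districts report the same 40% average, but the standard deviation (roughly 3 points versus 30) reveals that one is integrated while the other is segregated at the extremes.  That spread, not the average, is what the 13%/33% figures above are capturing.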

  4. There’s always room for improvement. “Fixing” a policy may mean making adjustments, not abandoning it altogether.

Student assessments under No Child Left Behind (2001) resulted in a narrowing of the curriculum.  But we also learned more about disadvantaged student groups and have continued to close the achievement gap for students of color.  Should we throw out testing altogether? Some would say yes, but most Americans say no.  Graduation rates, college enrollment, and achievement scores have all increased since NCLB passed in 2001.  What we can do is improve student assessments.  Adjusting consequences for students, teachers, and schools could result in less narrowing of the curriculum and subjects taught.  Using more well-rounded tests that encourage creative and critical thinking would help teachers emphasize these skills in class.  Continued improvement in data use can help teachers and school administrators adjust their practices and policies to see continued student growth.  States have the power to make some of these changes under the new Every Student Succeeds Act without dismantling gains made under No Child Left Behind.






February 3, 2016

PARCC test results lower for computer-based tests

In school year 2014-2015, students took the Partnership for Assessment of Readiness for College and Careers (PARCC) exam on a pilot basis. The PARCC exam was created to be in alignment with the Common Core Standards and is among the few standardized assessment measures of how well school districts are teaching higher-level competencies.

On February 3, Education Week reported in an article that the results for students who took the computer-based version of the exam were significantly lower than the results for students who took a traditional pencil and paper version. While the article states that the PARCC organization does not have a response or clear answer on why this occurred, I will offer my own explanation based on my experience as a teacher of students who took this exam last year.

I taught high school history, and the largest discrepancy in the results between students who took the computer versus the paper exam was at the high school level. Here is my theory for the discrepancy. Throughout students’ academic careers, we teachers teach them to “mark up” the text. This means that as they read books, articles, poems, primary sources, etc., students should have a pen or pencil and a highlighter in hand. There are many acronyms for how students should mark up their text. One is HACC: Highlight, Annotate, Circle unknown words, Comment. There are many others, but the idea is the same. Students are taught to summarize each paragraph in the margins and make note of key words. This helps students stay engaged with the reading, find main ideas, and think critically about what they are reading. It also makes it easier to go back and skim the text for the main ideas and remember what they read without re-reading.

Students are generally required to mark up and annotate text this way, but, honestly, I still do this! And I would bet that many adults do too. If you need to read a long article at work, you probably print it out and read it with a pen in hand. It makes it easier to focus on what you are reading. Now imagine that someone is going to test you on that article. You will be even more eager to read it carefully and write notes for yourself in the margins.

The point is that students are taught to do this when reading, especially when reading passages for exams where there will be questions based on the passage. My own students had this drilled into them throughout their high school years. Sometime last year the teachers learned that our school would be giving the pilot version of the PARCC exam to our students. During a teacher professional development day, we were asked to go online to the PARCC website, learn about the test, and take a practice exam. I encourage you to go online and take it for yourself — this exam is hard! We were asked to analyze the questions and think about ways we could change our own in-class exams to better align with PARCC. We were told that it would soon replace our state’s standardized exam.

One of the first things we all noticed was how long the reading passages are for the ELA portion of the test. It took a long time to read through them and we all struggled to read it on a computer screen. I really wanted to have a printed version to write my notes down! It was long and detailed and I felt as though by the time I saw the questions I would have to re-read the whole passage to find the answer (or find the section where I could infer an answer). I knew the students would struggle with this and anticipated lower scores on this exam than the state test. I was thankful that their scores wouldn’t actually count this year. But what happens when this becomes a high-stakes test?

As I anticipated, the scores for students who took the computer-based exams were far lower than those for students who took a traditional paper test. The Illinois State Board of Education found that, across all grades, 50% of students scored proficient on the paper-based PARCC exam, compared to only 32% of students who took the exam online. In Baltimore County, students who took the paper test scored almost 14 points higher than students of similar demographics who took the test on a computer.

The low scores themselves are a different story. Organizations will need to analyze the results of this major pilot and determine the test’s validity, and if it becomes mandatory, students and teachers will have to adjust to better learn the standards and testing format associated with it. The bigger story is that significant hardships come with taking a computer-based test.

My main concern is the reading passages. I don’t believe teachers should abandon the “mark it up” technique to bend to computer-based testing because learning how to annotate a text is valuable throughout people’s lives. I saw the students struggle to stare at the computer screen and focus on the words. Many used their finger on the screen to follow along with what they were reading. It was clearly frustrating for them not to be able to underline and make notes like they were used to doing.

Other concerns stem from the test being online. It requires internet access, a multitude of computers for students to test on, and students and teachers who are technologically savvy. When my school gave the test, it took several days and a lot of scheduling and disruption to get all students through it given our limited number of computers. Certain rooms of the building have a less reliable internet connection than others, and some students lost their connection while testing. Sometimes the system didn’t accept a student’s login or wouldn’t advance to the next page. There were no PARCC IT professionals in the building to fix these issues; instead, teachers who didn’t know the system any better than the students tried to help.

Not all students were ultimately able to take or finish the exam because of these issues. Thankfully their results didn’t count toward graduation! There are also equity concerns between students who are familiar with computers and typing and those who have had little exposure to technology. As a teacher in an urban school, I can tell you that it was not uncommon to see students typing essays on their phones because they didn’t have a computer.

As a whole, I’m not surprised by the discrepancy in test scores, and I imagine that other teachers are not either. The Education Week article quotes PARCC’s chief of assessment as saying, “There is some evidence that, in part, the [score] differences we’re seeing may be explained by students’ familiarity with the computer-delivery system.” This vague statement only hits the tip of the iceberg. I encourage those analyzing the cause of the discrepancy to talk to teachers and students. Also, ask yourselves how well you would do taking an exam completely online, particularly when there are long reading passages. –Breanna Higgins

Filed under: Accountability, Assessments, Common Core, High school, Testing — Breanna Higgins @ 4:27 pm





July 2, 2015

Testing, opt outs and equity

Spring heralds the return of many things – tulips, bare pavement, baseball, and for millions of public schoolkids, state tests. This year, however, the inevitable proved to be largely evitable. April tulips weren’t seen until late May. Much of the country experienced a white Easter. Major league games were snowed out. And tens of thousands of students just said “no” to being tested.

To be sure, the vast majority of students took their exams as expected. New York state has by far the largest number of test refusers. Yet an analysis by the New York Times estimates that only 165,000 New York students, or about one out of every six, opted out of one or more tests in 2015. Like New York, Colorado has experienced higher than usual opt outs but 83 percent of seniors still took their exams this year.

Despite the small numbers nationwide, the opt out movement is drawing attention to the test weariness that has been settling on many public school parents, teachers, and students, even among those who don’t opt out.  New Common Core tests seem to be adding to their anxiety.  By making their frustrations visible, the test refuseniks are starting to influence testing policy and its place in school accountability, most notably in Congress and the proposed ESEA bills currently under consideration.

So who are these opt outers? The New York Times analysis found that the movement appears to be a mostly middle-class phenomenon. According to their calculations, poor districts in New York (Free & Reduced Price Lunch > 60%) had the fewest test refusers, followed by the wealthiest (FRPL < 5%). An April 2015 poll by Siena College provides some other clues by identifying racial differences in voter attitudes. While a 55 percent majority of white voters in the Empire State approved of opting out, only 44 percent of black and Latino voters did.

A 2015 survey from the Public Policy Institute of California identified similar racial differences in opinions about the Common Core. Substantial majorities of California’s Latinos, Asians, and blacks expressed confidence that the new standards will “make students more college and career ready,” compared to less than half of white voters.

One probable reason for these racial and class differences is the role standards and assessments have played in educational equity over the last two decades. The 1994 re-authorization of ESEA laid the foundation for what would eventually become NCLB’s test-based accountability by calling on states to “establish a framework for comprehensive, standards-based education reform for all students.”  At that time, researchers and analysts were beginning to show that the achievement gap was not just a reflection of inequitable resources but also of unequal expectations. A 1994 study from the U.S. Department of Education’s Office of Research, for example, found that “students in high poverty schools … who received mostly A’s in English got about the same reading score [on NAEP] as did the ‘C’ and ‘D’ students in the most affluent schools.” In math, “the ‘A’ students in the high poverty schools most closely resembled the ‘D’ students in the most affluent schools.”  In 2001, NCLB would define further measures to correct these inequities by requiring state tests that would give the public a common, external measurement for gauging whether academic standards were being implemented equally between high- and low-poverty schools.

Indeed, the civil rights community has been among the most vocal supporters of standardized tests in accountability systems. Earlier this year, a coalition of 25 civil rights organizations led by the Leadership Conference on Civil and Human Rights released a statement of principles for ESEA reauthorization. Signatories included the NAACP, the National Council of La Raza, the National Congress of American Indians, and the National Disabilities Rights Network. Among other things, the principles call for retaining the annual testing requirements of NCLB. In May, twelve of these organizations issued another statement specifically criticizing the opt out movement, declaring:

[T]he anti-testing efforts that appear to be growing in states across the nation, like in Colorado and New York, would sabotage important data and rob us of the right to know how our students are faring. When parents ‘opt out’ of tests—even when out of protest for legitimate concerns—they’re not only making a choice for their own child, they’re inadvertently making a choice to undermine efforts to improve schools for every child.

The statement was not universally embraced. Notable civil rights leader Pedro Noguera along with the Advancement Project’s Browne Dianis and John Jackson of the Schott Foundation took exception to what they consider to be a “high-stakes, over-tested climate” for disadvantaged students. Yet their objections are not so much against tests themselves, but in how the information is used.

There is a growing consensus that the balance between assessment for improvement and assessment for accountability has become skewed toward high stakes – something many believe has a perverse effect on classroom practice. But like Mr. Noguera and his colleagues, many educators and experts also believe that standardized tests themselves are not the problem; it’s the outsized role they have assumed in everything from instruction to teacher evaluation. The next few months promise to launch many federal and state conversations about what the proper role of state tests should be. Ideally, that role will serve ongoing improvement while assuring the public that all students are receiving the benefits of a solid public education.

Filed under: Achievement Gaps, Assessments, Common Core, equity, Testing — Patte Barth @ 1:10 pm





March 17, 2015

Math skills needed to climb the economic ladder


With all the headlines about students opting out of testing, it appears there is an assumption that test scores have no connection to a student’s future success. There is certainly room to debate how much testing students should undergo and what role test results should play in student, teacher, and school accountability, but it can’t be ignored that test scores do in fact matter. No, test results are not a perfect measure of a student’s actual knowledge and skills, but perfect shouldn’t be the enemy of the good. That is, test scores are a good measure of a student’s knowledge and skills, and the new Common Core tests appear to be an even more accurate measure than previous state assessments, which at best were good measures of basic skills.

But does it really matter how students perform on a test? Yes, especially for students from the most economically disadvantaged families. If they want to climb the economic ladder, they had better perform well on their math tests. When I examined the math scores of 2004 seniors who took part in the Educational Longitudinal Study (ELS), I found that the better students from the bottom quartile of socioeconomic status (SES) performed on the ELS math assessment, the more likely they were to move up the economic ladder. For example, just 5 percent of low-SES students who scored within the lowest quartile on the math assessment moved up to the highest SES quartile by 2012. On the other hand, 36 percent of low-SES students who scored within the top quartile on the math assessment climbed to the top of the SES ladder by 2012. Moreover, nearly half of low-SES students who also scored in the lowest quartile on the math assessment remained in the lowest SES quartile in 2012, while only 11 percent of low-SES students who scored in the top quartile did.
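The quartile comparison above can be arranged as a small, partial transition table. This sketch uses only the percentages quoted in the paragraph ("nearly half" is approximated as 50 for illustration); the remaining cells of the full ELS tabulation are not reproduced here.

```python
# Partial transition table for low-SES 2004 seniors in the ELS:
# 2004 math-score quartile -> % at each 2012 SES destination.
# Figures are the ones quoted in the post; 50 stands in for "nearly half."
low_ses_outcomes = {
    "bottom_math_quartile": {"stayed_lowest_ses": 50, "reached_top_ses": 5},
    "top_math_quartile":    {"stayed_lowest_ses": 11, "reached_top_ses": 36},
}

# The post's argument in one comparison: among low-SES students, high
# math scores multiply the chances of reaching the top SES quartile.
ratio = (low_ses_outcomes["top_math_quartile"]["reached_top_ses"]
         / low_ses_outcomes["bottom_math_quartile"]["reached_top_ses"])
print(f"Top math scorers were {ratio:.1f}x as likely to reach the top SES quartile.")
```

Read this way, scoring in the top math quartile rather than the bottom multiplied a low-SES student's chances of reaching the top SES quartile roughly sevenfold (36% versus 5%).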

Taken together this provides strong evidence that economically disadvantaged students can improve their chances of moving up the economic ladder by performing well on math tests. On the other hand, low-performance on math tests will likely lead to continued economic challenges in their adult lives.

Of course, it is not simply improving test scores that enables economically disadvantaged students to move up the economic ladder; it is the skills the higher test scores represent. As CPE’s reports on getting into and succeeding in college showed, obtaining higher math skills leads to greater success in college. Furthermore, an upcoming CPE report will show that higher math skills also increase the chances that non-college enrollees will get a good job and contribute to society. So there is strong evidence that increasing a student’s math knowledge, as measured by standardized tests, gives economically disadvantaged students the tools they need to climb the economic ladder. –Jim Hull

Filed under: Assessments, Testing — Jim Hull @ 11:32 am




