

The EDifier

September 14, 2017

New research: High-stakes tests influence teacher assignment decisions, impacting long-term student achievement

A new study released last month raises potential concerns about the ways in which teacher assignment decisions may impact student achievement. The study, which drew on data from the Miami-Dade County Public Schools (M-DCPS) district from 2003 to 2012, examined whether less-effective teachers were assigned to untested grades and how those assignments affected students’ long-term academic achievement.

Previous studies have found that principals do take into account students’ academic growth when making decisions about teacher grade level assignments. One major factor in this decision is student scores on high-stakes standardized tests. Additionally, there has been evidence that less-effective teachers are more likely to be re-assigned to a low-stakes, untested classroom for the following school year. To further clarify whether teachers are re-assigned based on test scores, researchers measured the effect that a teacher has on students’ test score growth year over year. (Low-stakes tests given across the M-DCPS district were used to measure academic growth at the K-2 level.) They then examined the relationship between student test score growth and teacher grade level assignment in the following school year.

Researchers found that highly effective teachers in grades K-2, grades in which students are not subject to state tests, were more likely to be reassigned to grades three through five, the tested, high-stakes grades, in the following school year. In contrast, highly effective teachers in third through fifth grades were unlikely to be reassigned to an untested grade. However, their lower-performing peers, those third, fourth, and fifth-grade teachers whose students made the least progress, were more likely to be assigned to an untested K-2 grade in the following year. Researchers believe that by reassigning less-effective teachers out of tested grades, principals hope to improve student test scores over the short term. But what are the long-term consequences of concentrating the least-effective teachers in the “low-stakes” grades?

Though high-stakes standardized testing at the elementary level is focused in grades three through five, foundational skills learned in grades K-2, such as basic math and early literacy, drive success at all levels. After finding that lower-performing teachers are more likely to be reassigned to an untested grade, the researchers examined the effect that the resulting concentration of less-effective K-2 teachers could have on a student’s long-term achievement. Second graders taught by a teacher who had recently been reassigned from a tested grade had significantly lower gains in both literacy and math than their peers taught by teachers who had not been reassigned. Crucially, these effects carried into the following school year: a student taught by a recently reassigned teacher in second grade would also have lower third grade scores than their peers, reflecting a gap equivalent to having been taught by a first-year teacher during the second grade.

Clustering the least-effective teachers in untested grades, particularly K-2, where foundational skills like reading are taught, may have long-term consequences for student learning. Researchers have found that despite these lower gains for students over the long term, principals tend to focus on short-term staffing needs and concentrate the highest-performing teachers in high-stakes, tested grades. These findings should raise questions for any district: How are student test scores used in staffing decisions, and how do those decisions affect student learning over the long term?






February 7, 2017

School Improvement Grants: Why didn’t $7 billion change results for students?

Mathematica recently released a study of the federal School Improvement Grants (SIG) program. Their findings? Schools receiving the extra funds showed no significant improvement over similar schools that did not participate. With a price tag of $7 billion (yes, with a “b”), this strikes many as a waste of taxpayer dollars. Interestingly, the study also found no evidence that the SIG schools actually had significantly higher per-pupil expenditures than similar schools that didn’t receive the grants, which may have contributed to the mediocre results.

SIG, which was administered by states, awarded up to $2 million annually to 1,400 schools. The program began in the 2010-11 school year and continues through the end of the 2016-17 year. Starting in 2017-2018, the new Every Student Succeeds Act (ESSA) will allow states to use up to seven percent of their Title I allotments to improve the bottom five percent of schools. States may choose to dole out funds via formula or competitive grants, but districts are the ones responsible for using evidence-based practices to improve schools.

Under the old SIG rules, the federal government required schools to choose one of these four turnaround models:

  • Transformation: replace the principal and adopt reforms such as new teacher evaluation systems and instructional practices
  • Turnaround: replace the principal and at least half of the school’s staff
  • Restart: reopen the school under a charter or education management organization
  • Closure: close the school and send students to higher-performing schools nearby

The new report analyzed the transformation, turnaround, and restart models and found no statistically significant effects for any of them. The authors did find positive, but not statistically significant, effects on math and reading scores for schools receiving the grant, but lower high school graduation rates. Critics of the new report have noted that the statistical model chosen was not sensitive enough to detect small effects. The authors did find mixed effects each year that many studies would have had the power to detect as significant, but under this design they remain insignificant. To put the magnitude of these effects in perspective, the effect of decreasing elementary class sizes by seven students is about 0.2 standard deviations; the effect of urban charter schools compared to their neighborhood schools after one year is 0.01 standard deviations in math and -0.01 in reading (0.15 and 0.10 after four years). According to the Mathematica study, the effects of SIG in 2012-2013 were 0.01 standard deviations in math and 0.08 standard deviations in reading, along with a drop in the graduation rate (note that SIG had a positive impact on the graduation rate in 2011-2012, which suggests these results are not statistically distinguishable from zero). Not enough to conclude a positive effect, for sure, but not nothing, either.

 


I’ll offer a few of my own thoughts (based on research, of course) on why SIG didn’t have the success that was hoped for:

1. The authors found no evidence that the grant funds actually increased per-pupil spending. In government-speak, the funds may have supplanted other funding streams instead of supplementing them, even though the law states that federal funds are supposed to supplement other spending. They found that SIG schools spent about $245 more per student than similar non-SIG schools in 2011-2012, and only $100 more in 2012-2013 (again, the results are not statistically significant, meaning we can’t confidently say the difference isn’t zero). Recent studies have shown that spending makes a difference in education, so this may help explain why we didn’t see a difference here.

2. Students in many priority schools (the bottom five percent of schools), which are the ones that qualified for SIG grants, may have had the option to transfer to higher-performing schools. While the report doesn’t address this, it seems likely that students with more involved parents and better academic achievement were more likely to take up this option, thus lowering the average scores of the schools they left behind. Students perform better when surrounded by higher-performing peers, which means the lack of an overall effect could have been influenced by the loss of higher-achieving students.

3. Schools receiving SIG grants were high-poverty and high-minority. The average rate of students eligible for free or reduced-price lunch (FRL) in the study group was 83 percent, with non-white students making up 91 percent of the school populations (compared with the overall public school population, which is about 50 percent FRL-eligible and 50 percent non-white). While the resources allocated through SIG should have made spending more equitable, these schools may have still struggled with recruiting and retaining experienced, qualified teachers, which is often a challenge for high-poverty, high-minority schools. Research is clear that integrated schools have better outcomes for students than segregated schools. Yet the reform strategies used under SIG (replacing school staff and/or converting to a charter school) did little to improve school integration.

Hopefully, states and districts will learn from these lessons and use school reforms that fundamentally change the practices of the school, not just a few personnel: increased funding, school integration, changes in instructional practices, meaningful teacher/principal mentoring and development, and/or wrap-around services for students in poverty or who have experienced trauma.






May 17, 2016

Legislatures address teacher shortages

The Center for Public Education recently released its newest report, Fixing the Holes in the Teacher Pipeline: An Overview of Teacher Shortages, which comes at a critical time when many state legislatures, local districts, and national organizations are focusing on this issue. The report lays out best practices for preparing, recruiting, and retaining quality teachers.

Indiana’s Department of Education yesterday reported that it will be implementing the recommendations of its own Blue Ribbon Commission, many of which align with CPE’s report, including: partnering with Indiana University to address the shortage of special education teachers by increasing the supports given to current and prospective special education teachers; creating a full-time position to increase professional development and networking opportunities for teachers; and hosting the first teacher recruitment conference for students currently in high school (what CPE called “growing your own”).

Nevada is faced with a critical shortage as well. EdWeek has reported that the state is using both short-term and long-term strategies, such as fast-track teaching certifications, hiring bonuses for working in low-income schools, developing teacher recruiter positions, and working on new contracts that would increase pay for teachers.

For all districts faced with teacher shortages, keep in mind the questions CPE suggests asking about your district (listed below). Also, the research and I (as a former teacher) agree that although a living-wage salary is crucial, teachers most often report leaving a school or the profession because of poor working conditions rather than salary. -Breanna Higgins

Questions for School Boards and District Leaders:

  • Do we have enough teachers? Are there schools or subject areas in the district that are harder to staff than others? Does the demographic make-up of our staff reflect that of our students?
  • Are our teachers qualified? Are all our teachers licensed in the area of their assignment? How many teachers have emergency credentials?
  • Are we able to recruit qualified teachers? How do our salaries compare to neighboring districts? Can we provide incentives in shortage areas? How effective are our induction programs?
  • Do we retain qualified teachers? What is our turnover rate? How does it compare to other districts? Do teachers feel supported in our schools?
  • Can we grow our own? Do we have partnerships with universities? Can we collaborate on recruiting and training qualified candidates in order to maintain a steady supply of good teachers in our schools?
Filed under: Public education, Report Summary, research, School boards, teachers — Breanna Higgins @ 11:59 am





October 28, 2015

U.S. Performance Slumps According to National Report Card


There is simply no way to sugarcoat today’s NAEP 4th and 8th grade math and reading results. They were disappointing, to say the least. With the exception of a few states and districts, results remained flat or declined across both grades and subjects between the last administration in 2013 and 2015.

Specifically, national math scores declined between 2013 and 2015 at both the 4th and 8th grade levels, while reading scores dipped in 8th grade but remained steady at the 4th grade level. States didn’t fare much better during this time period, either. In fact, no state made any significant improvement in 8th grade math, while Mississippi, Washington, DC, and Department of Defense schools made modest gains at the 4th grade level. Of the 20 large districts that participated in NAEP in both 2013 and 2015, only Chicago improved over its 2013 results at the 8th grade level. Washington, DC, Miami-Dade, and Dallas improved their performance at the 4th grade level as well, while scores in 7 districts declined.

When it came to reading, West Virginia was the lone bright spot at the 8th grade level, being the only state to post gains from 2013 to 2015. In 4th grade reading, 13 states made significant gains, topped by Washington, DC (7 points), Louisiana (6 points), Mississippi (6 points), and Oklahoma (5 points). Miami-Dade was the only district to post gains at the 8th grade level, while Boston, Chicago, Cleveland, and Washington, DC made gains in 4th grade. Most districts saw neither improvement nor declines in either 4th or 8th grade.

While this year’s NAEP results are disheartening, one data point does not make a trend. Keep in mind that NAEP scores have steadily increased over the past 25 years. In fact, even with this year’s declines, 8th graders still scored 19 points higher in math than 8th graders in 1990, which equates to nearly two years’ worth of learning. Since 2000, 8th graders have improved their math performance by 9 points, nearly a year’s worth of learning. So while scores declined in 2015, it does not necessarily mean our schools are less effective. The results from this and every NAEP release should be judged against the larger trend, which has shown steady gains over the past decade.
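
As a rough illustration of the points-to-years conversions above (assuming the commonly cited rule of thumb that roughly 10 NAEP scale points correspond to about one year of learning; the post does not spell out the conversion itself):

$$
\frac{19 \text{ points}}{\approx 10 \text{ points per year}} \approx 2 \text{ years}, \qquad
\frac{9 \text{ points}}{\approx 10 \text{ points per year}} \approx 1 \text{ year}
$$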

But this also does not mean this year’s NAEP results should be ignored. Researchers, policymakers, and educators should take a deep look at these results, as well as other indicators of school quality such as results from state assessments, to determine whether this year’s NAEP results are an anomaly or the start of a new downward trend. By examining NAEP scores along with other measures of school quality, policymakers can make more informed decisions on what is needed to support our public schools.

 

The Findings

 

4th Grade Math

District Level

  • Of the 20 large urban school districts that took part in NAEP in both 2013 and 2015, Washington, DC, Miami-Dade, and Dallas were the only districts to make significant gains.
    • On the other hand, 7 districts saw declines in their average 4th grade mathematics scores since 2013.
  • Charlotte, Hillsborough (FL), and Austin were the highest performing districts, while Detroit, Baltimore City, and Cleveland were the lowest performing.

State Level

  • At the state level, scores increased between 2013 and 2015 in three states/jurisdictions (Mississippi, Washington, DC, and Department of Defense schools). Fifteen states had increased their scores between 2011 and 2013.
    • 16 states saw declines in their average 4th grade mathematics scores since 2013. No state saw declines between 2011 and 2013.
  • Massachusetts, Minnesota and New Hampshire were the highest performing states, while Alabama, New Mexico, and Washington, DC were the lowest performing.

National Level

  • Nationally, scores dropped by 2 points between 2013 and 2015.
    • Student achievement in math has increased by 27 points (2.5 years’ worth of learning) since 1990, the first year of the current NAEP math assessment.
  • The percent of students scoring at or above NAEP’s Proficient level dropped by 2 percentage points between 2013 and 2015 (42 and 40 percent respectively).
    • The proficiency rate has more than tripled since 1990 (13 percent in 1990 vs. 40 percent in 2015).
    • Moreover, the percent of students scoring below NAEP’s Basic level increased from 17 percent in 2013 to 18 percent in 2015. In 1990, 50 percent of 4th graders scored below the Basic level.

 

8th Grade Math

District Level

  • Between 2013 and 2015 Chicago was the only district to make significant gains.
    • Only Hillsborough (FL) and Houston saw declines during this time period.
  • Just as with 4th grade math, Charlotte, Austin, and Boston were the highest performing districts, while Detroit, Baltimore City, and Cleveland were the lowest performing.

State Level

  • At the 8th grade level, 22 states saw declines in their scores between 2013 and 2015, while not a single state made statistically significant improvements during this time.
  • Massachusetts continues to post the highest 8th grade math scores, with New Hampshire, Minnesota and New Jersey close behind. Washington, DC, Alabama, Louisiana and Mississippi scored the lowest.

National Level

  • Between 2013 and 2015, national scores fell for the first time, dropping 3 points. However, students in 2015 have still obtained about two more years’ worth of learning in math than students in 1990.
  • The percent of students reaching NAEP’s Proficient level has more than doubled, from 15 percent in 1990 to 33 percent in 2015. The percent scoring below NAEP’s Basic level decreased from 48 percent to 29 percent during the same time period.

4th Grade Reading

 

District Level

  • Of the 20 large urban school districts that took part in NAEP in both 2013 and 2015, Boston, Chicago, Cleveland, and Washington, DC were the only districts to make significant gains.
    • On the other hand, Baltimore City was the only district that saw declines in its scores during the same time period.
  • Hillsborough (FL), Miami-Dade and Charlotte were the highest scoring districts, while Detroit, Cleveland, and Baltimore City were the lowest scoring.

State Level

  • At the state level, scores increased between 2013 and 2015 in 13 states/jurisdictions. Only Maryland and Minnesota saw their scores decline during this time period.
  • Five states saw their scores increase by 5 points or more during this time period, with Washington, DC leading the way with a 7 point gain, followed by Louisiana (6 points), Mississippi (6 points), and Oklahoma (5 points).
  • Massachusetts, Department of Defense schools, and New Hampshire were the highest performing states, while New Mexico, Washington, DC, California, and Alaska were the lowest performing.

National Level

  • Nationally, scores increased by 1 point from 2013 to 2015, but the increase was not statistically significant, meaning it may simply reflect chance.
  • The percent of students scoring at or above NAEP’s Proficient level increased by 1 percentage point between 2013 and 2015 (35 and 36 percent respectively) but the increase was not statistically significant either.
    • The proficiency rate has increased from 29 percent in 1992 to 36 percent in 2015.
    • Moreover, the percent of students scoring below NAEP’s Basic level has decreased from 32 percent in 2013 to 31 percent in 2015. In 1992 38 percent of 4th graders scored below the Basic level.

8th Grade Reading

District Level

  • Between 2013 and 2015 Miami-Dade was the only district to make significant gains.
    • Only Hillsborough (FL), Albuquerque and Baltimore City saw declines during this time period.
  • Among the highest performing districts were Charlotte, Austin, Miami-Dade and San Diego, while Detroit, Baltimore City, Cleveland, and Fresno were the lowest performing.

State Level

  • At the 8th grade level, 8 states saw declines in their scores between 2013 and 2015, while West Virginia was the only state to increase its score during this time.
  • Department of Defense schools posted the highest reading scores, with New Hampshire, Massachusetts and Vermont close behind. On the other hand, Washington, DC, Mississippi, and New Mexico scored the lowest.

National Level

  • Between 2013 and 2015, scores fell 3 points, bringing the overall score back down to the 2011 level of 265, which had been the all-time high prior to 2013.
  • The percent of students reaching NAEP’s proficient level decreased from 36 to 34 percent between 2013 and 2015. During this same time period the percent scoring below NAEP’s Basic level increased from 22 percent to 24 percent.
Filed under: NAEP, Report Summary — Jim Hull @ 3:39 pm





October 27, 2015

Fewer, better tests

Parents have been concerned about the amount of testing their children have been subjected to in recent years, to the point where some are choosing to opt their children out of certain standardized tests. Yet a number of educators, policymakers, and education organizations have expressed the need for such tests to identify those students whose needs are not being fully met, particularly poor, minority, and other traditionally disadvantaged students. Unfortunately, it has been unclear how much testing is actually taking place in our nation’s schools.

But yesterday, a report from the Council of the Great City Schools (CGCS) provided the most comprehensive examination of testing to date, shedding important light on the quantity and quality of testing students are exposed to. Among the report’s findings:

  • The average eighth-grader spends 25.3 hours per year taking mandated assessments, which accounts for 4.22 days or 2.34 percent of total instructional time (see the rough arithmetic after this list).
    • Only 8.9 hours of this testing is due to NCLB-mandated assessments.
    • Formative assessments are most likely to be given three times a year and account for 10.8 hours of testing for eighth-graders.
  • There is no correlation between the amount of mandated testing and performance on the National Assessment of Educational Progress (NAEP).
  • Urban school districts have more tests designed for diagnostic purposes than other uses.
  • Opt-out rates in the 66 school districts that participated in the study were typically less than 1 percent.
  • 78 percent of parents surveyed agreed or strongly agreed with the statement “accountability for how well my child is educated is important, and it begins with accurate measurement of what he/she is learning in school.”
    • Yet fewer agreed when the word ‘test’ appeared in the statement.
  • Parents support ‘better’ tests but are not necessarily as supportive of ‘harder’ or ‘more rigorous’ tests.
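
As a rough check of the time figures in the first bullet, the reported 4.22 days and 2.34 percent are consistent with assuming about a six-hour instructional day over a 180-day school year (roughly 1,080 instructional hours); those assumptions are mine, not the report’s:

$$
\frac{25.3 \text{ hours}}{6 \text{ hours per day}} \approx 4.2 \text{ days}, \qquad
\frac{25.3 \text{ hours}}{180 \times 6 \text{ hours}} \approx 2.3\%
$$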

These are much-needed findings in a debate about testing that has been dominated by anecdotal accounts and theoretical arguments. CGCS’s report has provided facts to inform policymakers on time spent on testing, as well as the quality and usefulness of the tests. In fact, these findings led President Obama to propose that the amount of time students spend on mandatory tests be limited to 2 percent of instructional time.

While limiting the time students spend taking tests is a good thing, the report highlights the fact that over-testing is not necessarily a quantity problem but a quality problem. For example, the report found that many of the tests were aligned neither to each other nor to college- and career-ready standards. That means many students were administered unnecessary and redundant tests that provided little, if any, information to improve instruction. Moreover, results for many tests, including some formative assessments, were not available for months after they were taken, failing to provide teachers information in time to adjust their instruction. So the information from many tests is neither timely nor useful.

For testing to drive quality instruction, testing systems must be aligned to college- and career-ready standards and provide usable and timely information. Doing so does not necessarily lead to less testing time, but it does lead to a more efficient testing system. While there is plenty of blame to go around for the lack of a coherent testing system, district leaders play a lead role in ensuring that each and every test is worth taking. Tools such as Achieve’s Student Assessment Inventory for School Districts can inform district leaders on how much testing is actually taking place in their classrooms and why. With such information in hand, they can make more informed decisions on which tests to continue using and which should be eliminated, as well as whether better tests are needed to more accurately measure what students are expected to learn. The result would be a more coherent testing system with fewer, better tests that drive quality instruction and, in turn, improve student outcomes. – Jim Hull





