

The EDifier

February 7, 2017

School Improvement Grants: Why didn’t $7 billion change results for students?

Mathematica recently released a study of the federal School Improvement Grants (SIG) program. Its findings? Schools receiving the extra funds showed no significant improvement over similar schools that did not participate. With a price tag of $7 billion (yes, with a “b”), this strikes many as a waste of taxpayer dollars. Interestingly, the study also found no evidence that the SIG schools actually had significantly higher per-pupil expenditures than similar schools that didn’t receive the grants, which may help explain the disappointing results.

SIG, administered by the states, awarded up to $2 million annually to some 1,400 schools. The program began in the 2010-11 school year and continues through the end of 2016-17. Starting in 2017-18, the new Every Student Succeeds Act (ESSA) will allow states to use up to seven percent of their Title I allotments to improve the bottom five percent of schools. States may choose to distribute the funds via formula or competitive grants, but districts are responsible for using evidence-based practices to improve schools.

Under the old SIG rules, the federal government required schools to choose one of these four turnaround models:

  • Transformation: replace the principal and reform instructional and governance practices
  • Turnaround: replace the principal and at least half of the school’s staff
  • Restart: reopen under the management of a charter or education management organization
  • Closure: close the school and enroll its students in higher-performing schools nearby

The new report analyzed the transformation, turnaround, and restart models and found no statistically significant effects for any of them. The authors did find positive, but not statistically significant, effects on math and reading scores for schools receiving the grants, alongside lower high school graduation rates. Critics of the report have noted that the statistical model chosen was not sensitive enough to detect small effects: the authors found mixed effects each year that many studies would have had the power to detect as significant, but that remain insignificant under this study’s design. To put the magnitude of these effects in perspective, the effect of decreasing elementary class sizes by seven students is about 0.2 standard deviations, and the effect of urban charter schools compared to their neighborhood schools after one year is 0.01 standard deviations in math and -0.01 in reading (0.15 and 0.10 after four years). According to the Mathematica study, the effects of SIG in 2012-13 were 0.01 standard deviations in math and 0.08 standard deviations in reading, along with a drop in the graduation rate (note that SIG had a positive impact on the graduation rate in 2011-12, which suggests these estimates are not statistically distinguishable from zero). Not enough to conclude a positive effect, for sure, but not nothing, either.
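
For readers unfamiliar with the units, a standardized effect size is simply the difference in average scores between treated and comparison schools, divided by the standard deviation of student scores. Here is a minimal sketch in Python; the means and standard deviation are illustrative assumptions, not values from the Mathematica report:

    # Minimal sketch of a standardized effect size.
    # All numbers are hypothetical, not figures from the Mathematica report.

    def effect_size(treated_mean: float, comparison_mean: float, sd: float) -> float:
        """Difference in group means, in student-level standard deviations."""
        return (treated_mean - comparison_mean) / sd

    # Suppose SIG schools average 225.3 on a state math test, comparison
    # schools average 225.0, and student scores have a standard deviation of 30.
    d = effect_size(225.3, 225.0, 30.0)
    print(f"effect size = {d:.2f} standard deviations")  # 0.01, a very small effect

Whether an effect this small registers as “significant” depends on a study’s power to distinguish it from zero, which is exactly the critics’ point about the Mathematica design.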

 


I’ll offer a few of my own thoughts (based on research, of course) on why SIG didn’t have the success that was hoped for:

1. The authors found no evidence that the grant funds actually increased per-pupil spending. In government-speak, the funds may have supplanted other funding streams instead of supplementing them, even though the law states that federal funds are supposed to supplement other spending. SIG schools spent about $245 more per student than similar non-SIG schools in 2011-12, and only $100 more in 2012-13 (again, these differences are not statistically significant, meaning we can’t confidently say they aren’t zero; see the sketch after this list). Recent studies have shown that spending makes a difference in education, so this may help explain why we didn’t see a difference here.

2. Students in many priority schools (the bottom five percent of schools), which are the ones that qualified for SIG grants, may have had the option to transfer to higher-performing schools. While the report doesn’t address this, students with more involved parents and better academic achievement were probably more likely to take up that option, lowering the average scores of the schools they left behind. Students perform better when surrounded by higher-performing peers, so the lack of an overall effect could partly reflect the loss of higher-achieving students.

3. Schools receiving SIG grants were high-poverty and high-minority. The average rate of students eligible for free and reduced-price lunch (FRL) in the study group was 83 percent, and non-white students made up 91 percent of the school populations (compared with about 50 percent FRL-eligible and 50 percent non-white in the overall school population). While the resources allocated through SIG should have made spending more equitable, these schools may still have struggled to recruit and retain experienced, qualified teachers, which is often a challenge for high-poverty, high-minority schools. Research is clear that integrated schools have better outcomes for students than segregated schools. Yet the reform strategies used under SIG (replacing school staff and/or converting to a charter school) did little to improve school integration.
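
As promised in point 1, “not statistically significant” means the confidence interval around the estimated spending difference includes zero. A minimal sketch, using the report’s $245 point estimate but an invented standard error (the report does not publish one in this form):

    # Hypothetical illustration of an insignificant spending difference.
    # The $245 estimate is from the report; the standard error is an
    # invented assumption chosen only to mimic the situation described.

    estimate = 245.0   # estimated extra per-pupil spending, in dollars
    std_error = 180.0  # hypothetical standard error of that estimate

    # Approximate 95 percent confidence interval: estimate +/- 1.96 * SE
    low = estimate - 1.96 * std_error
    high = estimate + 1.96 * std_error
    print(f"95% CI: ({low:.0f}, {high:.0f}) dollars")  # (-108, 598) dollars

    # Because the interval contains zero, we cannot confidently rule out
    # the possibility that the true spending difference is zero.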

Hopefully, states and districts will learn these lessons and adopt school reforms that fundamentally change the practices of the school, not just a few personnel: increased funding, school integration, changes in instructional practice, meaningful teacher and principal mentoring and development, and wrap-around services for students in poverty or who have experienced trauma.






May 17, 2016

Legislatures address teacher shortages

The Center for Public Education recently released its newest report, Fixing the Holes in the Teacher Pipeline: An Overview of Teacher Shortages, which comes at a critical time when many state legislatures, local districts, and national organizations are focusing on this issue. The report lays out best practices for preparing, recruiting, and retaining quality teachers.

Indiana’s Department of Education reported yesterday that it will implement the recommendations of its own Blue Ribbon Commission, many of which align with CPE’s report, including: partnering with Indiana University to address the shortage of special education teachers by increasing the supports given to current and prospective special education teachers; creating a full-time position to increase professional development and networking opportunities for teachers; and hosting the first teacher recruitment conference for current high school students (what CPE calls “growing your own”).

Nevada faces a critical shortage as well. EdWeek has reported that the state is using both short-term and long-term strategies, such as fast-track teaching certifications, hiring bonuses for working in low-income schools, new teacher-recruiter positions, and new contracts that would increase teacher pay.

Districts facing teacher shortages should keep in mind the questions CPE suggests asking (listed below). Also, research and my own experience as a former teacher agree: although a living-wage salary is crucial, teachers most often report leaving a school or the profession because of poor working conditions rather than salary. -Breanna Higgins

Questions for School Boards and District Leaders:

  • Do we have enough teachers? Are there schools or subject areas in the district that are harder to staff than others? Does the demographic make-up of our staff reflect that of our students?
  • Are our teachers qualified? Are all our teachers licensed in the area of their assignment? How many teachers have emergency credentials?
  • Are we able to recruit qualified teachers? How do our salaries compare to neighboring districts? Can we provide incentives in shortage areas? How effective are our induction programs?
  • Do we retain qualified teachers? What is our turnover rate? How does it compare to other districts? Do teachers feel supported in our schools?
  • Can we grow our own? Do we have partnerships with universities? Can we collaborate on recruiting and training qualified candidates in order to maintain a steady supply of good teachers in our schools?
Filed under: Public education,Report Summary,research,School boards,teachers — Breanna Higgins @ 11:59 am





October 28, 2015

U.S. Performance Slumps According to National Report Card


There is simply no way to sugarcoat today’s NAEP 4th and 8th grade math and reading results. They were disappointing, to say the least. With the exception of a few states and districts, results remained flat or declined across both grades and subjects between the last administration in 2013 and 2015.

Specifically, national math scores declined between 2013 and 2015 at both the 4th and 8th grade levels, while reading scores dipped in 8th grade but held steady in 4th grade. States didn’t fare much better. No state made a significant improvement in 8th grade math, while only Mississippi, Washington, DC, and Department of Defense schools made modest gains at the 4th grade level. Of the 20 large districts that participated in NAEP in both 2013 and 2015, only Chicago improved on its 2013 results at the 8th grade level. Washington, DC, Miami-Dade, and Dallas improved at the 4th grade level, while 4th grade scores in seven districts declined.

When it came to reading, West Virginia was the lone bright spot at the 8th grade level as the only state to post gains from 2013 to 2015. In 4th grade reading, 13 states made significant gains, topped by Washington, DC (7 points), Louisiana (6 points), Mississippi (6 points), and Oklahoma (5 points). Miami-Dade was the only district to post gains at the 8th grade level, while Boston, Chicago, Cleveland, and Washington, DC made gains in 4th grade. Most districts saw neither improvement nor decline at either grade level.

While this year’s NAEP results are disheartening, one data point does not make a trend. Keep in mind that NAEP scores have steadily increased over the past 25 years. In fact, even with this year’s declines, 8th graders still scored 19 points higher in math than 8th graders in 1990, which equates to nearly two years’ worth of learning. Since 2000, 8th graders have improved their math performance by 9 points, nearly a year’s worth of learning. So while scores declined in 2015, it does not necessarily mean our schools are less effective. The results from this and every NAEP release should be viewed against the larger trend, which has shown steady gains over the past decade.
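
The “years’ worth of learning” conversions above rest on a common rule of thumb that roughly 10 NAEP scale points correspond to one year of learning. The constant in the sketch below is that rule of thumb, not an official NAEP figure:

    # Rule-of-thumb conversion (an assumption, not an official NAEP figure):
    # roughly 10 NAEP scale points correspond to one year of learning.
    POINTS_PER_YEAR = 10.0

    def years_of_learning(point_gain: float) -> float:
        """Convert a NAEP scale-point gain into approximate years of learning."""
        return point_gain / POINTS_PER_YEAR

    print(years_of_learning(19))  # 1.9 -> "nearly two years" since 1990
    print(years_of_learning(9))   # 0.9 -> "nearly a year" since 2000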

But this does not mean this year’s NAEP results should be ignored. Researchers, policymakers, and educators should take a deep look at these results, along with other indicators of school quality such as state assessment results, to determine whether this year’s NAEP scores are an anomaly or the start of a new downward trend. By examining NAEP scores alongside other measures of school quality, policymakers can make more informed decisions about what is needed to support our public schools.

 

The Findings

 

4th Grade Math

District Level

  • Of the 20 large urban school districts that took part in NAEP in both 2013 and 2015, Washington, DC, Miami-Dade, and Dallas were the only districts to make significant gains.
    • On the other hand, 7 districts saw declines in their average 4th grade mathematics scores since 2013.
  • Charlotte, Hillsborough (FL), and Austin were the highest performing districts, while Detroit, Baltimore City, and Cleveland were the lowest performing.

State Level

  • At the state level, scores increased between 2013 and 2015 in three states/jurisdictions (Mississippi, Washington, DC, and Department of Defense schools). Fifteen states had increased their scores between 2011 and 2013.
    • 16 states saw declines in their average 4th grade mathematics score since 2013. No state saw a decline between 2011 and 2013.
  • Massachusetts, Minnesota and New Hampshire were the highest performing states, while Alabama, New Mexico, and Washington, DC were the lowest performing.

National Level

  • Nationally, scores dropped by 2 points between 2013 and 2015.
    • Student achievement in math has increased by 27 points (about 2.5 years’ worth of learning) since 1990, the first year the current math assessment was given.
  • The percent of students scoring at or above NAEP’s Proficient level dropped by 2 percentage points between 2013 and 2015 (42 and 40 percent respectively).
    • The proficiency rate has more than tripled since 1990 (13 percent in 1990 vs. 40 percent in 2015).
    • Moreover, the percent of students scoring below NAEP’s Basic level increased from 17 percent in 2013 to 18 percent in 2015. In 1990, 50 percent of 4th graders scored below the Basic level.

 

8th Grade Math

District Level

  • Between 2013 and 2015 Chicago was the only district to make significant gains.
    • Only Hillsborough (FL) and Houston saw declines during this time period.
  • Charlotte, Austin, and Boston were the highest performing districts, while, just as with 4th grade math, Detroit, Baltimore City, and Cleveland were the lowest performing.

State Level

  • At the 8th grade level, 22 states saw declines in their scores between 2013 and 2015, while not a single state made statistically significant improvements during this time.
  • Massachusetts continues to post the highest 8th grade math scores, with New Hampshire, Minnesota and New Jersey close behind. Washington, DC, Alabama, Louisiana and Mississippi scored the lowest.

National Level

  • Between 2013 and 2015, national scores fell 3 points, the assessment’s first decline. Even so, students in 2015 have obtained about two more years’ worth of learning in math than students in 1990.
  • The percent of students reaching NAEP’s Proficient level has more than doubled, from 15 percent in 1990 to 33 percent in 2015. The percent scoring below NAEP’s Basic level decreased from 48 percent to 29 percent over the same period.

4th Grade Reading

 

District Level

  • Of the 20 large urban school districts that took part in NAEP in both 2013 and 2015, Boston, Chicago, Cleveland, and Washington, DC were the only districts to make significant gains.
    • On the other hand, Baltimore City was the only district that saw its scores decline during the same time period.
  • Hillsborough (FL), Miami-Dade and Charlotte were the highest scoring districts, while Detroit, Cleveland, and Baltimore City were the lowest scoring.

State Level

  • At the state level, scores increased between 2013 and 2015 in 13 states/jurisdictions. Only Maryland and Minnesota saw their scores decline during this time period.
  • Five states saw their scores increase by 5 points or more during this time period, topped by Washington, DC with a 7-point gain, followed by Louisiana (6 points), Mississippi (6 points), and Oklahoma (5 points).
  • Massachusetts, Department of Defense schools, and New Hampshire were the highest performing states, while New Mexico, Washington, DC, California, and Alaska were the lowest performing.

National Level

  • Nationally, scores increased by 1 point from 2013 to 2015, but the increase was not statistically significant, meaning it could have occurred by chance.
  • The percent of students scoring at or above NAEP’s Proficient level increased by 1 percentage point between 2013 and 2015 (35 and 36 percent respectively) but the increase was not statistically significant either.
    • The proficiency rate has increased from 29 percent in 1992 to 36 percent in 2015.
    • Moreover, the percent of students scoring below NAEP’s Basic level has decreased from 32 percent in 2013 to 31 percent in 2015. In 1992 38 percent of 4th graders scored below the Basic level.

8th Grade Reading

District Level

  • Between 2013 and 2015 Miami-Dade was the only district to make significant gains.
    • Only Hillsborough (FL), Albuquerque and Baltimore City saw declines during this time period.
  • Among the highest performing districts were Charlotte, Austin, Miami-Dade and San Diego, while Detroit, Baltimore City, Cleveland, and Fresno were the lowest performing.

State Level

  • At the 8th grade level, 8 states saw declines in their scores between 2013 and 2015, while West Virginia was the only state to increase its score during this time.
  • Department of Defense schools posted the highest reading scores, with New Hampshire, Massachusetts and Vermont close behind. On the other hand, Washington, DC, Mississippi, and New Mexico scored the lowest.

National Level

  • Between 2013 and 2015, scores fell 3 points, bringing the overall score back down to the 2011 level of 265, which had been the all-time high prior to 2013.
  • The percent of students reaching NAEP’s proficient level decreased from 36 to 34 percent between 2013 and 2015. During this same time period the percent scoring below NAEP’s Basic level increased from 22 percent to 24 percent.
Filed under: NAEP,Report Summary — Jim Hull @ 3:39 pm





October 27, 2015

Fewer, better tests

Parents have been concerned about the amount of testing their children have been subjected to in recent years, to the point where some are choosing to opt their children out of certain standardized tests. Yet a number of educators, policymakers, and education organizations have expressed the need for such tests to identify students whose needs are not being fully met, particularly poor, minority, and other traditionally disadvantaged students. Unfortunately, it has been unclear how much testing is actually taking place in our nation’s schools.

But yesterday, a report from the Council of the Great City Schools (CGCS) provided the most comprehensive examination of testing to date, shedding important light on the quantity and quality of the tests students are exposed to. Among its findings:

  • The average eighth-grader spends 25.3 hours per year taking mandated assessments, which accounts for 4.22 days, or 2.34 percent, of total instructional time (see the arithmetic sketch after this list).
    • Only 8.9 hours of this testing is due to NCLB-mandated assessments.
    • Formative assessments are most likely to be given three times a year and account for 10.8 hours of testing for eighth-graders.
  • There is no correlation between the amount of mandated testing and performance on the National Assessment of Educational Progress (NAEP).
  • Urban school districts have more tests designed for diagnostic purposes than other uses.
  • Opt-out rates in the 66 school districts that participated in the study were typically less than 1 percent.
  • 78 percent of parents surveyed agreed or strongly agreed with the statement “accountability for how well my child is educated is important, and it begins with accurate measurement of what he/she is learning in school.”
    • Yet fewer agreed when the word “test” appeared in the statement.
  • Parents support ‘better’ tests but are not necessarily as supportive of ‘harder’ or ‘more rigorous’ tests.
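
The testing-time figures in the first bullet can be reproduced with standard assumptions about the school year. The 6-hour instructional day and 180-day year below are my assumptions, not figures from the report, but they match its numbers almost exactly:

    # Check the report's testing-time arithmetic. The 6-hour instructional
    # day and 180-day school year are assumptions, not from the report.
    HOURS_PER_DAY = 6.0
    DAYS_PER_YEAR = 180.0

    testing_hours = 25.3
    days = testing_hours / HOURS_PER_DAY
    share = testing_hours / (HOURS_PER_DAY * DAYS_PER_YEAR)

    print(f"{days:.2f} days")            # 4.22 days, matching the report
    print(f"{share:.2%} of class time")  # 2.34%, matching the report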

These are much-needed findings in a debate that has been dominated by anecdotal accounts and theoretical arguments. CGCS’s report provides facts to inform policymakers about time spent on testing, as well as the quality and usefulness of the tests. In fact, these findings led President Obama to propose limiting the amount of time students spend on mandatory tests to 2 percent of instructional time.

While limiting the time students spend taking tests is a good thing, the report highlights the fact that over-testing is not so much a quantity problem as a quality problem. For example, the report found that many of the tests were aligned neither to each other nor to college- and career-ready standards. In other words, many students were administered unnecessary and redundant tests that provided little, if any, information to improve instruction. Moreover, results for many tests, including some formative assessments, were not available for months after they were taken, failing to give teachers information in time to adjust their instruction. The information from many tests is neither timely nor useful.

For testing to drive quality instruction, testing systems must be aligned to college- and career-ready standards and provide usable and timely information. Doing so does not necessarily lead to less testing time, but it does lead to a more efficient testing system. While there is plenty of blame to go around for the lack of a coherent testing system, district leaders play a lead role in ensuring that each and every test is worth taking. Tools such as Achieve’s Student Assessment Inventory for School Districts can show district leaders how much testing is actually taking place in their classrooms and why. With that information in hand, they can make more informed decisions about which tests to keep and which to eliminate, as well as whether better tests are needed to more accurately measure what students are expected to learn. The result will be a more coherent system of fewer, better tests that drive quality instruction and, in turn, improve student outcomes. – Jim Hull






September 3, 2015

Fewer High School Grads Ready for College According to Latest SAT Results

Just as last year, this year’s SAT results included results from the College Board’s two other testing programs, the PSAT/NMSQT and Advanced Placement (AP) exams, providing a more complete picture of student progress toward college readiness throughout high school.

While ACT results released last week showed overall scores among the graduating class of 2015 remaining flat, SAT scores saw a significant drop. In fact, scores on the college-entrance exam are at their lowest level in the ten years since the College Board added a writing section to go along with the critical reading and mathematics sections. Not only have SAT scores been declining over the long run, they dropped by 7 points in the past year alone, the largest one-year drop since the writing section was introduced. Scores dropped in each of the three sections as well.

Stark differences are also evident when it comes to the ACT and SAT college-readiness benchmarks. According to the ACT, slightly more students are graduating high school college-ready than in the previous year, yet SAT results show fewer students graduating college-ready. Although each exam has its own method of determining college readiness, one would expect year-to-year changes to move in roughly the same direction. That was not the case in 2015.

Since neither the ACT nor the SAT is representative of all high school graduates nationwide, it is impossible to pinpoint why the two tests provide such conflicting information about the quality of our nation’s high schools. In most states these tests are optional, so only those students expecting to go on to a four-year college are likely to take them. A number of students also take both the SAT and ACT, so their scores are counted twice, which can affect results as well. In addition, the ACT and SAT measure different skills, although in the coming years this will be less of an issue as the SAT will be redesigned to align with the Common Core, with which the ACT already aligns.

Still, there are a number of possible reasons why the ACT and SAT are providing such conflicting reports. It could be that, as the ACT has become more popular and more colleges accept it, fewer higher-performing students in traditional ACT states are taking the SAT. It could also be that more lower-performing students, who previously would not have taken the SAT, are now taking the college-entrance exam, which would lower SAT scores, at least in the short run.

Unfortunately, there is no clear answer. But considering that almost every other indicator of the effectiveness of our nation’s high schools points in a positive direction, we shouldn’t put too much weight on a single indicator such as the SAT. We know that more students than ever are graduating on time with a regular diploma, and doing so having completed more rigorous courses. Moreover, more of these graduates are going on to college than ever before. Yet despite these positive results, this year’s SAT results paint a much dimmer picture. It will be important to watch SAT results in the coming years to see whether this year’s results are an anomaly or the start of a trend. In the meantime, educators, school board members, and other policymakers shouldn’t put too much stock in one year’s results, but should dig deeper into the SAT results for their local schools to see what they can learn, so they can better prepare future graduates to get into and succeed in college.—Jim Hull

 

The Findings

Overall SAT Scores

  • The combined score across the three SAT sections (Critical Reading, Mathematics, and Writing) was 1490, a 10-year low and the lowest since the Writing section was introduced.
  • The combined score dropped 7 points in just one year, the largest single-year drop since the Writing section was introduced.
  • Scores dropped in all three sections from 2014 to 2015.
    • Critical Reading declined from 497 to 495.
    • Mathematics scores fell from 513 to 511.
    • Writing scores dropped from 487 to 484.

College Readiness

  • Fewer than half (41.9 percent) of test-takers met the SAT College-Ready Benchmark in 2015, down from 43 percent in 2014.
    • The SAT College-Ready Benchmark is a combined score of 1550 or higher; students reaching it have a 65 percent chance of earning at least a B-minus grade point average in their freshman-year courses (see the sketch after this list).
  • Minority students less likely to be college-ready.
    • Just 16.1 percent of black students and 22.7 percent of Hispanic students were college-ready, according to the SAT’s Benchmark.
      • More black students reached the college-ready benchmark in 2015 than in 2014 (15.8 percent).
      • However, fewer Hispanic students reached the college-ready benchmark in 2015 compared to 2014 (23.4 percent).
    • On the other hand, over half (52.8 percent) of white students met the benchmark in 2015, as did 61.3 percent of Asian students.
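
The benchmark described above is a simple threshold on the combined score. A minimal sketch; the 1550 cutoff is the College Board’s, while the example scores are illustrative (the first set happens to match 2015’s section averages):

    # Sketch of the SAT College-Ready Benchmark rule described above.
    # The 1550 cutoff is the College Board's; the example scores are made up.
    BENCHMARK = 1550

    def college_ready(critical_reading: int, math: int, writing: int) -> bool:
        """True if the combined SAT score meets the 1550 benchmark."""
        return critical_reading + math + writing >= BENCHMARK

    print(college_ready(495, 511, 484))  # False: 1490 combined, below 1550
    print(college_ready(520, 540, 500))  # True: 1560 combined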

SAT Test Takers

  • Just over 1.7 million students from the Class of 2015 took the SAT at some point during high school, a 3 percent increase from 2011.
  • More minority students taking the SAT.
    • Nearly a third (32.5 percent) of test-takers were underrepresented minorities in 2015, compared to 31.3 percent just a year earlier and 29 percent in 2011.

PSAT/NMSQT (10th grade exam) Results

  • Nearly 4 out of 10 10th graders who took the College Board’s PSAT/NMSQT exam in 2015 scored at the grade-level benchmark indicating they were on track for college and career readiness.
  • Just 16.7 percent of black 10th graders who took the PSAT/NMSQT reached the grade-level benchmark in 2015, while 54.7 percent of white examinees did so.
  • Only 19.8 percent of Hispanic examinees met the grade-level benchmark, while 61.5 percent of Asian examinees did so.

 

Advanced Placement (AP)

  • In 2015, 2.5 million students took at least one AP exam, compared to 2.3 million a year earlier and 2.0 million in 2011.
    • In total, 4.5 million AP exams were administered in 2015 compared to 4.2 million in 2014 and 3.5 million in 2011.
  • As more students took AP exams, more students passed them as well. The number of students scoring a 3 or higher on at least one AP exam increased from 1.4 million in 2014 to 1.5 million in 2015. In 2011, just 1.2 million students passed at least one AP exam.
    • Minority students less likely to pass at least one AP exam.
      • A third (32.3 percent) of black students who took at least one AP exam scored a 3 or higher compared to 66 percent of white examinees.
      • Half of Hispanic examinees passed at least one AP exam.
      • Nearly three-quarters (72.2 percent) of Asian examinees scored 3 or higher on at least one AP exam.
  • Over a quarter (26.2 percent) of students who took an AP exam were from an underrepresented minority group, slightly higher than in 2014 (26.0 percent) and a significant increase from 23.9 percent in 2011.




