

The EDifier

November 13, 2017

ESSA growth vs. proficiency: a former teacher’s perspective

Right now, state education departments are working to develop plans that meet all the requirements of the federal government's Every Student Succeeds Act (ESSA). One area that has received a lot of attention in ESSA is the accountability section and the indicators it requires for holding schools accountable for student learning.

The first indicator under ESSA is known as the academic achievement indicator and requires states to annually measure English/Language Arts (ELA) and math proficiency using statewide assessments. To simplify, I would call this a proficiency indicator: states use the information from state standardized tests to see if students are meeting grade-level standards. When I was a 4th grade teacher, this information was incredibly useful for me. I needed to know what level my students were performing at in language arts and math so that I could scaffold lesson plans, create student groups, and understand which students needed to do the most catching up. These scores also helped me talk to parents about where their child was performing in relation to where he/she should be performing as a 4th grader. However, a student's proficiency score was only part of the puzzle, which is where the second indicator under ESSA comes in to complete the picture.

The second ESSA accountability indicator is the academic progress indicator, which looks at the growth or progress that an individual student or subgroup of students has made in elementary and middle school. States have created different policies to measure this, but the general goal is to measure an individual student’s growth over a period of time.

When I was a teacher I also had a method of measuring this for each of my students. For example, in reading I would assess each student's starting reading level at the beginning of the year and then map out an individual plan. Each student would have to grow between six and eight reading levels, depending on where they started, with the overall goal of growing the equivalent of two grade levels. Some of my students did grow two grade levels but were still below where they should have been for that grade. For others, the two-grade boost put them well above the 4th grade reading level. It is important for teachers and students to understand and celebrate their progress at multiple checkpoints throughout the year that are not in the form of state tests. In my classroom, this gave students a sense of purpose for their assignments because they wanted to meet the individual goal that we had set together. As a teacher, I also would constantly adjust assignments, homework, student pairs, etc. based on the new levels that students reached throughout the year.
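To make the distinction concrete, here is a minimal sketch of the kind of tracking this involves. The reading-level scale, the grade-level benchmark, and the student values below are hypothetical illustrations, not my actual classroom tracker:

```python
# Minimal sketch of tracking growth vs. proficiency for a 4th grade class.
# The benchmark, reading-level scale, and student values are hypothetical.

GRADE_LEVEL_BENCHMARK = 40  # hypothetical reading level expected of a 4th grader

students = [
    # (name, reading level at start of year, reading level at end of year)
    ("Student A", 22, 36),  # strong growth, but still below the benchmark
    ("Student B", 38, 52),  # strong growth, now well above the benchmark
    ("Student C", 39, 41),  # proficient, but barely grew at all
]

for name, start, end in students:
    growth = end - start  # the growth measure: progress over the year
    proficient = end >= GRADE_LEVEL_BENCHMARK  # the proficiency measure
    status = "meets" if proficient else "is below"
    print(f"{name}: grew {growth} levels and {status} the grade-level benchmark")
```

Neither column alone tells the whole story: Student A's strong growth masks a remaining achievement gap, while Student C's proficiency masks a year of little progress.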

For me, both proficiency and growth measures were crucial to my students' success. The growth measure made learning real for students as they saw their reading levels steadily increase throughout the year. But I couldn't rely on growth measures alone. The proficiency measure provided the benchmark that told me where 4th grade students should be performing by the end of the year. Without it, I could not have identified the achievement gaps in my classroom, nor could I have communicated them to the parents of my students. After understanding the difference between growth and proficiency indicators and how to use the data from each to inform my instruction, I do not think it is a matter of one being more important than the other, but rather of both working together to paint a more holistic picture of student learning.

[Image: data tracker]

Filed under: Accountability, Assessments, CPE, Testing — Annie Hemphill @ 3:28 pm





April 28, 2017

New federal study of DC voucher program shows academic decline

A new federal analysis of the District of Columbia’s voucher program has found that students who transferred to private schools posted similar and, in some cases, worse scores than their peers who remained in public schools.

The findings appear to be the first time the Institute of Education Sciences (the research arm of the U.S. Department of Education) has noted that voucher recipients performed worse on some academic measures than DC public school pupils in general.

It comes on the heels of new research on Louisiana's and Ohio's statewide voucher programs, which showed precipitous declines in test scores for students who used a voucher to transfer to a private school relative to similar students who stayed in public schools.

Created by Congress and signed into law by President Bush in 2004, the Opportunity Scholarship Program was intended to provide low-income families in the District of Columbia with tuition subsidies to attend private schools. Reauthorized in 2011 as the Scholarships for Opportunity and Results (SOAR) Act, it was the first and remains the only federally funded voucher program in the U.S.

Ongoing evaluation of SOAR was a key feature of both the 2004 and 2011 bills, so IES has conducted numerous studies in the past looking at student outcomes, parent satisfaction, and general characteristics of the participants. But this is the first time researchers have observed a sharp difference between the test scores of SOAR participants and non-participants. Before we get to the specifics, some background: the study's sample included students who applied to the program in 2012, 2013, and 2014 and were either offered or not offered a scholarship; the differences between the two groups on a variety of measures were studied one year after SOAR students transferred to private schools.

Among the report’s highlights:

  • Math scores dropped, on average, 7.3 percentile points for voucher recipients compared to students who applied but had not been selected for the program.
  • Reading scores dropped among elementary students (7.1 percentile points) who participated in SOAR compared to those who did not, but there was little discernible difference at the secondary level between these two groups.
  • Students who transferred from low performing schools (the very students the program is intended to help) saw no significant gain on their test scores one year after transferring to private school.
  • Meanwhile, voucher participants who had not transferred from schools designated as “in need of improvement” saw their math scores drop by an average of 14.1 percentile points and their reading scores by 11.3 percentile points compared to students who remained in public schools.

While these findings aren't as dramatic as Louisiana's, where students saw a 27 percentile point drop in math one year after transferring to private schools, it's yet another chink in the, let's face it, drafty armor known as school choice.

To be clear, there’s nothing wrong with having options. The problem is when one equates more options with better outcomes. This is not always the case, as this and other studies are showing.






March 31, 2017

Public Charter Schools and Accountability

Earlier this week, the Brookings Institution released the fifth annual Education Choice and Competition Index, which ranks school choice in the largest school districts in the U.S.

During her address, Secretary of Education Betsy DeVos claimed that “parents are the primary point of accountability.” When asked about policies that ensure that schools of choice are actually improving student performance, she answered that “the policies around empowering parents and moving the decision-making to the hands of parents on behalf of children is really the direction we need to go.” She later repeated the idea that transparency and information, coupled with parental choice, equated to accountability.

While it is indeed important to communicate information on school choice, transparency and information are only part of the accountability puzzle. In addition to these components, states also use accountability to ensure that schools that fail to meet academic or financial standards are improved or closed.

This is of particular importance for public charter schools, which have been given the authority to operate independently of school districts and of many state rules and regulations. Accountability rules assure that students are learning and that public funds are spent responsibly.

While the accountability measures used for charter schools to demonstrate quality performance vary from state to state, they do exist, and they include more than just reporting information to parents.

Forty-three states had charter school laws in place when we completed this analysis (not including Kentucky, which passed a bill in March 2017 to allow charter schools). We examined four points of accountability within the charter school policies as recorded by the Education Commission of the States: annual reporting, specifications for termination, performance thresholds, and technical assistance.

Annual Reporting

Most states require charter schools to submit annual reports as a part of their accountability obligations. These reporting requirements may include annual report cards, education progress reports, curriculum development, attendance rates, graduation rates, and college admission test scores. Many states that do not require annual reports still require financial reports, which speaks to the other side of accountability: the appropriate use of funds.

  • Some states, such as Washington, require charter schools to provide the same annual school performance reports as non-charter schools.
  • In Ohio, each charter school is required to disseminate the state Department of Education’s school report card to all parents.
  • North Carolina requires its charter schools to publish their performance ratings, awarded by the State Board of Education, on the internet. If the rating is a D or F, the charter school must send written notice to parents. North Carolina also requires specific data reporting related to student reading.

State Specification for Termination

Forty-two states specify the grounds for terminating a charter school, fostering accountability by establishing standards and consequences for failing to adhere to them. Failure to demonstrate academic achievement and failure to increase overall school performance are among the grounds for termination cited by some states.

These state specifications for termination do not only apply to performance levels; they can be applied to a violation of any part of the charter law or agreement, such as fraud, failure to meet audit requirements, or failure to meet standards set for basic operations.

State Threshold

In addition to state specifications for termination, some states have set a threshold marking the lowest level at which a school can perform before it is closed. Some states without a clearly communicated low-performance threshold have set other standards that specifically mark the lowest point of acceptable performance.

Setting a minimum threshold for performance for the automatic closure of failing schools may increase charter school accountability, and encourage high performance.

State-Provided Technical Assistance

Technical assistance to charter schools includes leadership training or mentoring for charter school leaders, as well as assistance with grant and application writing and other paperwork related to charter school operation.

In addition to holding charter schools accountable for high performance, several states offer technical assistance to ensure that charter school administrators understand how requirements are measured and can be directed to resources that help them achieve performance goals, especially if they are at risk of closure for failing to meet previously established standards.

These are clear displays of school accountability policies that help ensure parents have truly good schools from which to choose. Accountability relies not only on information for parents, but also on consequences for schools that fail to educate students or use taxpayer dollars responsibly.

[Image: Charter Accountability chart]

[1] The following states also require annual financial audits with their annual performance reports: Arkansas, Arizona, DC, Georgia, Hawaii, Oregon, Michigan, Texas, and Utah.

[2] Utah requires the most comprehensive technical assistance offerings, provided by the state charter school board, which include assistance with the application and approval process for charter school authorization, locating private funding and support sources, and understanding and implementing charter requirements.

 

Filed under: Accountability, Charter Schools, School Choice — Katharine Carter @ 4:42 pm





February 7, 2017

School Improvement Grants: Why didn’t $7 billion change results for students?

Mathematica recently released a study of the federal School Improvement Grants (SIG) program. Their findings? Schools receiving the extra funds showed no significant improvement over similar schools that did not participate. With a price tag of $7 billion (yes, with a “b”), this strikes many as a waste of taxpayer dollars. Interestingly, the study also found no evidence that SIG schools actually had significantly higher per-pupil expenditures than similar schools that didn’t receive the grants, which may have contributed to the mediocre results.

SIG, administered by states, awarded up to $2 million annually to 1,400 schools. The program began in the 2010-11 school year and continues through the end of the 2016-17 year. Starting in 2017-18, the new Every Student Succeeds Act (ESSA) will allow states to use up to seven percent of their Title I allotments to improve the bottom five percent of schools. States may choose to dole out funds via formula or competitive grants, but districts are the ones responsible for using evidence-based practices to improve schools.

Under the old SIG rules, the federal government required schools to choose from one of these four turnaround models:

[Image: the four SIG turnaround models: transformation, turnaround, restart, and school closure]

The new report analyzed the transformation, turnaround, and restart models, and found no statistically significant effects for any of them. The authors did find positive, but not statistically significant, effects on math and reading scores for schools receiving the grant, but lower high school graduation rates. Critics of the new report have noted that the statistical model chosen was not sensitive enough to detect small effects. The authors did find mixed effects each year, which many studies would have had the power to find significant, but due to the design, these remain insignificant. To put the magnitude of these effects in perspective: the effect of decreasing elementary class sizes by seven students is about 0.2 standard deviations; the effect of urban charter schools compared to their neighborhood schools after one year is 0.01 standard deviations in math and -0.01 in reading (0.15 and 0.10 after four years). According to the Mathematica study, the effects of SIG in 2012-2013 were 0.01 standard deviations in math and 0.08 standard deviations in reading, along with a drop in the graduation rate (note that SIG had a positive impact on the graduation rate in 2011-2012, which suggests that these results are not statistically significant, or could be zero). Not enough to conclude a positive effect, for sure, but not nothing, either.
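For readers less familiar with effect sizes, here is a minimal sketch of the computation: the difference between two group means, expressed in standard deviation units. The scores below are hypothetical, and Mathematica's actual analysis was considerably more sophisticated:

```python
# Minimal sketch of a standardized effect size. Scores are hypothetical;
# this is not Mathematica's actual data or model.
import statistics

sig_scores = [612, 598, 605, 620, 590, 615]         # hypothetical SIG schools
comparison_scores = [608, 595, 601, 617, 588, 610]  # hypothetical comparison schools

# Spread of scores across both groups (a simple stand-in for a pooled SD)
pooled_sd = statistics.stdev(sig_scores + comparison_scores)

# Effect size in standard deviations. The 0.01 SD reported for SIG math
# in 2012-2013 means the groups differed by one hundredth of an SD: tiny
# next to the ~0.2 SD effect of a seven-student class-size reduction.
effect_size = (statistics.mean(sig_scores) - statistics.mean(comparison_scores)) / pooled_sd
print(f"Effect size: {effect_size:.2f} standard deviations")
```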

 


I’ll offer a couple of my own thoughts (based on research, of course) on why SIG didn’t have the success that was hoped for:

1. The authors found no evidence that the grant funds actually increased per-pupil spending. In government-speak, the funds may have supplanted other funding streams instead of supplementing them, even though the law states that federal funds are supposed to supplement other funds spent. They found that SIG schools spent about $245 more per student than similar non-SIG schools in 2011-2012, and only $100 more in 2012-2013 (again the results are not statistically significant, meaning that we can’t confidently say that the difference isn’t zero). Recent studies have shown that spending makes a difference in education, so this may help explain why we didn’t see a difference here.

2. Students in many priority schools (the bottom five percent of schools), which are the ones that qualified for SIG grants, may have had the option to transfer to higher-performing schools. While the report doesn’t address this, it seems that students with more involved parents and better academic achievement may have been more likely to utilize this offer, thus lowering the average scores of the schools they left behind. Students perform better when surrounded with higher-performing peers, which means that the lack of overall effect could have been influenced by the loss of higher achieving students.

3. Schools receiving SIG grants were high-poverty and high-minority. The average rate of students eligible for free- and reduced-price lunch (FRL) in the study group was 83 percent, with non-white students making up 91 percent of the school populations (compared with the overall school population, which is about 50 percent FRL-eligible and 50 percent non-white). While the resources allocated through SIG to these schools should have made spending more equitable, schools may have still struggled with recruiting and retaining experienced, qualified teachers, which is often a challenge for high-poverty, high-minority schools. Research is clear that integrated schools have better outcomes for students than segregated schools. Yet the reform strategies used under SIG (replacing school staff and/or converting to a charter school) did little to improve school integration.

Hopefully, states and districts will learn from these lessons and use school reforms that fundamentally change the practices of the school, not just a few personnel: increased funding, school integration, changes in instructional practices, meaningful teacher/principal mentoring and development, and/or wrap-around services for students in poverty or who have experienced trauma.






September 28, 2016

How do we measure the immeasurable—and should we?

We address what we assess. I never cared much about how far I walked until I bought a Fitbit and saw that my friends apparently walk 15 miles a day. The same is true of schools.

Under No Child Left Behind (NCLB), we began assessing our students’ math, reading, and science abilities, and test scores improved.  While some of that growth may have been due to teachers teaching to the test or students adapting to standardized assessments, we should still acknowledge that having stronger data about achievement gaps has helped us build the argument for greater equity in education.

The Every Student Succeeds Act (ESSA) adds a new, non-academic factor to school accountability in response to the over-emphasis on tested subjects that many schools experienced under NCLB.  States have to determine what their accountability plan will include, and policy wonks are chiming in with research and cautionary tales.  It seems that we can all agree that the non-academic factor should be equitable (not favoring particular student groups), mutable (able to be changed), measurable (we have to be able to put some sort of ranking or number on it), and important to student growth and learning (or else, who cares?).  So far, I haven’t heard any consensus come out of the field on what this could look like.


States may even want to consider testing out several different variables to see what the data tells them. The non-academic variable could be minimally weighted until states are sure that their data is reliable, ensuring both that schools aren't penalized for faulty data and that schools don't try to game the new system. States may also choose to use multiple indicators so that pressure isn't exerted on one lone factor. States also have to keep in mind that children develop at different rates. While chronic absenteeism is a problem for students of all ages, first-graders may differ in their abilities to self-regulate their emotions based on gender and age.

A group of CORE districts in California has been testing a “dashboard” of metrics for several years and is offering its strategy to the entire state, as documented by Stanford’s Learning Policy Institute. Forty percent of a school’s rating is based on social and emotional learning indicators, including measures of social-emotional skills; suspension/expulsion rates; chronic absenteeism; culture/climate surveys from students, staff, and parents; and English learner re-designation rates. The other 60 percent is based on academic performance and growth.
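As a rough illustration of how a composite rating with this 40/60 split might be computed, here is a sketch. The indicator scales and values are hypothetical; this is not CORE's actual formula:

```python
# Sketch of a weighted composite school rating using the 40/60 split
# described above. Indicator scales and values are hypothetical; this
# is not CORE's actual methodology.

# Each indicator scored 0-100 for one hypothetical school
sel_indicators = {
    "social_emotional_skills": 72,
    "suspension_expulsion": 80,   # higher score = fewer suspensions/expulsions
    "chronic_absenteeism": 65,    # higher score = less absenteeism
    "culture_climate_surveys": 78,
    "el_redesignation": 70,
}
academic_indicators = {
    "performance": 68,
    "growth": 74,
}

# Average within each domain, then apply 40% SEL / 60% academic weights
sel_score = sum(sel_indicators.values()) / len(sel_indicators)                 # 73.0
academic_score = sum(academic_indicators.values()) / len(academic_indicators)  # 71.0
composite = 0.4 * sel_score + 0.6 * academic_score

print(f"Composite rating: {composite:.1f}")  # 71.8
```

Weighting the non-academic domain below 50 percent, as CORE does, limits how much a school can gain by gaming softer measures while its data quality is still being established.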

The reality is that our students need more than just math and reading.  They need to learn how to interact with others who are different from themselves.  They need to be able to creatively problem solve.  They need to think critically about the world around them.  Good teachers have been teaching their students these skills for decades; now we just have to make sure that all students have these enriching opportunities.

Filed under: Accountability, CPE, ESSA — Chandi Wagner @ 8:00 am




