

The EDifier

November 13, 2017

ESSA growth vs. proficiency: a former teacher’s perspective

Right now, state education departments are working to develop plans that meet all the requirements of the federal government's Every Student Succeeds Act (ESSA). One area that has received a lot of attention is ESSA's student accountability section and the indicators it requires for holding schools accountable for student learning.

The first indicator under ESSA, known as the academic achievement indicator, requires states to annually measure English/Language Arts (ELA) and math proficiency using statewide assessments. To simplify, I would call this a proficiency indicator: states use information from state standardized tests to see whether students are meeting grade-level standards. When I was a 4th grade teacher, this information was incredibly useful for me. I needed to know what level my students were performing at in language arts and math so that I could scaffold lesson plans, create student groups, and understand which students needed to do the most catching up. These scores also helped me talk to parents about where their child was performing in relation to where he or she should be performing as a 4th grader. However, a student's proficiency score was only part of the puzzle, which is where the second indicator under ESSA comes in to complete the picture.

The second ESSA accountability indicator is the academic progress indicator, which looks at the growth or progress that an individual student or subgroup of students has made in elementary and middle school. States have created different policies to measure this, but the general goal is to measure an individual student’s growth over a period of time.

When I was a teacher, I also had a method of measuring this for each of my students. In reading, for example, I would assess each student's reading level at the beginning of the year and then map out an individual plan. Each student would have to grow between six and eight reading levels, depending on where they started, with the overall goal of growing the equivalent of two grade levels. Some of my students did grow two grade levels but were still below where they should have been for their grade; for others, the two-grade boost put them well above the 4th grade reading level. It is important for teachers and students to understand and celebrate progress at multiple checkpoints throughout the year that are not in the form of state tests. In my classroom, this gave students a sense of purpose for their assignments because they wanted to meet the individual goals we had set together. As a teacher, I also constantly adjusted assignments, homework, student pairs, and so on based on the new levels that students reached throughout the year.

For me, both proficiency and growth measures were crucial to my students' success. The growth measure made learning real for students as they saw their reading levels steadily increase throughout the year. But I couldn't rely on growth measures alone. The proficiency measure provided the benchmark that told me what level fourth grade students should be able to perform at by the end of the year. Without it, I could not have identified the achievement gaps in my classroom, nor could I have communicated them to the parents of my students. Having used the data from both growth and proficiency indicators to inform my instruction as a teacher, I do not think it is a matter of one being more important than the other; rather, the two work together to paint a more holistic picture of student learning.
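To make the distinction concrete, here is a minimal sketch using entirely hypothetical reading levels and a made-up grade-level target (no state's actual scale is implied). It shows how the growth indicator and the proficiency indicator answer different questions about the same student:

```python
# Hypothetical 4th-grade reading data: each student's level at the start
# and end of the year. The levels and the "proficient" cutoff below are
# illustrative only.
GRADE_LEVEL_TARGET = 40  # assumed level a 4th grader should reach

students = {
    "A": {"start": 24, "end": 36},  # large growth, still below target
    "B": {"start": 38, "end": 42},  # modest growth, above target
}

for name, s in students.items():
    growth = s["end"] - s["start"]               # growth indicator
    proficient = s["end"] >= GRADE_LEVEL_TARGET  # proficiency indicator
    print(f"{name}: grew {growth} levels, proficient={proficient}")
```

Student A shows strong growth but is not yet proficient; student B is proficient despite modest growth. Neither number alone tells the whole story, which is the point of using both indicators together.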


Filed under: Accountability,Assessments,CPE,Testing — Tags: , , , — Annie Hemphill @ 3:28 pm





January 11, 2013

Putting it all together

As Naomi wrote yesterday, the results of the final report from the Gates Foundation's MET study are not groundbreaking. A number of researchers (including yours truly), policymakers, and advocates have been saying for years that the most accurate way to evaluate teachers is to employ multiple measures of teacher performance, including measures of student achievement.

Even so, a number of highly respected education policy and research experts, such as Jack Jennings and Linda Darling-Hammond, have argued that measures of student performance such as value-added are too unreliable to accurately capture a teacher's true performance. However, such critiques assume that value-added is the sole measure of a teacher's performance, which magnifies its limitations. These and other critics of value-added typically claim that other measures of teacher performance, such as teacher observations, are more accurate and should be used to evaluate teachers in lieu of value-added.

However, the claim that observations are a more accurate evaluation tool turns out not to be true at all. This is where the MET study really gets interesting. MET researchers specifically examined teacher observations and found that using only observations to evaluate teachers has more limitations than using only value-added measures. Specifically, they found that a teacher's observation score differed significantly depending on who did the observing and which lessons were observed. As Jennings and Darling-Hammond point out, researchers have long known that value-added scores fluctuate significantly from year to year, and even from assessment to assessment, as well.

What gets lost in the rhetoric is the fact that both tools can be made more accurate. For example, value-added scores are more accurate when they are averaged over multiple years, a point critics often leave out. Observation scores, on the other hand, are more accurate when teachers are observed multiple times by multiple people. No measure is perfect, but there are ways to make each more accurate. Most importantly, the MET study found that when these and other measures were used together, they accurately predicted how teachers would perform in the future. So students who currently have a teacher who previously earned high value-added and observation scores are more likely to make greater achievement gains than similar students whose teacher earned lower scores.
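The averaging point can be illustrated with a toy simulation (this is not the MET methodology, just a sketch with made-up numbers): if a single-year value-added estimate is a teacher's true effect plus year-to-year noise, averaging several years of estimates pulls the numbers closer to the true effect.

```python
import random

random.seed(0)
TRUE_EFFECT = 0.30  # a teacher's hypothetical "true" value-added
NOISE_SD = 0.20     # assumed year-to-year noise in a one-year estimate

def yearly_score():
    # One year's noisy estimate of the teacher's true effect.
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# 1,000 simulated single-year estimates vs. 1,000 three-year averages.
one_year = [yearly_score() for _ in range(1000)]
three_year = [sum(yearly_score() for _ in range(3)) / 3 for _ in range(1000)]

def spread(xs):
    # Standard deviation: how far estimates scatter around their mean.
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print(spread(one_year))    # wider scatter
print(spread(three_year))  # noticeably tighter scatter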

It is important to point out that even when using multiple measures, some very good teachers will be identified as ineffective and vice versa. However, an evaluation system based on a combination of value-added, observations, and other measures is much better than using any one of these measures alone. By using multiple measures to more accurately evaluate teachers, administrators and policymakers can make personnel decisions based on how those decisions will likely impact student achievement. This is a great improvement over the current system, which simply evaluates teachers based on their years of experience and the highest degree they have earned, which the MET study found is the least accurate way to evaluate teachers. – Jim Hull

Filed under: Growth Models,Teacher evaluation,teachers — Tags: , , — Jim Hull @ 3:57 pm





July 5, 2012

Poverty Not the Reason Teachers Labeled Ineffective

It’s no secret that most teachers are not fans of being evaluated based on student test scores. This is true even though, according to the findings of an upcoming teacher survey from EdSector, the majority of teachers believe they should receive pay increases if they consistently receive outstanding evaluations from their principals. The survey also found that support for performance pay fell to less than a third of teachers when pay increases were based on student test scores. So teachers don’t mind being paid based on their performance, but they don’t want their performance determined by test scores.

So why is this the case? Why are teachers so against being evaluated, even in part, based on how their students perform on standardized tests? Of course, there is no single answer but EdSector provides insights into the minds of many teachers. They highlight one teacher who said:

While she loved her job, if her pay depended on student test scores, she “would be crazy” to continue working in her school, which has a student body that is 90 percent low-income and speaks 18 different languages.

She says this because she, like many teachers, believes that standardized tests do not accurately measure the progress that low-income and other traditionally disadvantaged students make during a given school year. Essentially, many teachers, like the one highlighted, believe their evaluation scores would suffer if they taught a class made up mostly of students from disadvantaged backgrounds.

I certainly don’t blame teachers for having such concerns. There have been many attempts to pay teachers based on their students’ performance that did penalize teachers for working with disadvantaged students.

However, times have changed and statisticians have developed tools to take on this problem. For example, value-added models have been designed specifically to isolate the impact a teacher has on students, regardless of those students’ backgrounds or previous achievement. Value-added models are not perfect, but they do a pretty good job of determining whether a student would be better off with one specific teacher instead of another.

Despite the use of value-added models, teachers still believe they would essentially be penalized for having a class of disadvantaged students.  Even researchers who are critical of using value-added data to evaluate teachers believe teachers of disadvantaged students are penalized when they are evaluated based on value-added data.

However, findings from a recent report from the CALDER Center run counter to such criticism. The report found that a teacher’s evaluation score does not suffer when they teach more disadvantaged students. Specifically, CALDER found that teachers in North Carolina who transferred from a low-poverty school to a high-poverty school actually received higher value-added scores in the high-poverty school than they had in the low-poverty school.

These findings provide strong evidence that teachers of disadvantaged students are not penalized when they are evaluated using high-quality value-added data. Even so, as the Center’s report Building a Better Evaluation System argues, no teacher should be evaluated based only on student test scores, or any single measure for that matter. For a teacher evaluation system to yield accurate results and provide information that helps all teachers improve, multiple measures should be used, including student test scores. – Jim Hull






February 27, 2012

Evaluating principals, too

Principals are finally getting their turn in the spotlight. The Connecticut Mirror recently ran an article highlighting a proposal I’m seeing more and more: evaluating principals according to students’ and teachers’ progress, just as has been proposed for teachers.

As reported, the proposed Connecticut plan “calls for student performance and test results to count for 45 percent of a principal’s grade. The remaining parts will be linked to superintendent observations and surveys of parent, peer and school employees.” Whether teachers are improving on their own evaluations also counts toward a principal’s grade.

I’ll be interested to see what happens with this proposal. As always, at the Center we think it’s important to have any evaluation, especially one that includes student scores, be made up of multiple measures of performance. Read our report Building a Better Evaluation System to find out why. But the principal’s role in the school is one that has been overlooked for too long. As an upcoming report from the Center will show, principals have a significant impact on schools and student achievement. I think the Connecticut proposal would show the same thing. — Rebecca St. Andrie







October 31, 2011

Merit pay revisited- Is Denver’s pay for performance a model plan?

Although it remains a controversial issue, merit pay has long since evolved from the days when test scores were the single factor in determining whether a teacher would get paid for performance.  Nowadays a number of school districts across the country have developed multi-pronged plans aimed at equitably rewarding teachers for their accomplishments.  Nonetheless, the question still remains: Is there actually a way to fairly reward a professional who deals with the advancement of human capital?  No plan is perfect, but one district might have come close.

In 2009, the Center took a look at merit pay and mentioned Denver’s ProComp pay-for-performance plan. Now a three-year study of ProComp, conducted between the fall of 2006 and the spring of 2010 by Dan Goldhaber and Joe Walch of the Center for Education Data and Research, has come out. Denver Public Schools (DPS) requires all teachers hired in 2006 or later to be part of the ProComp plan and gives veteran teachers the choice to opt in. ProComp offers teachers four opportunities to receive bonuses:

  1. Knowledge and Skills: Teachers may earn pay for completing one professional development unit (PDU) per year (and can bank extra PDUs), earning advanced degrees and licenses, and can even receive tuition and student loan reimbursement (50 to 65 percent received this pay)
  2. Comprehensive Professional Evaluation: Based on principal evaluations, which occur every one to three years (5 to 14 percent received this pay)
  3. Market Incentives: Aimed at teachers who work in hard-to-serve schools and/or hard-to-staff subject areas, as determined by school demographics and market supply (35 to 65 percent received this pay)
  4. Student Growth: Teachers set student growth objectives, based on what they expect students to learn, which are approved by the principal (for example: I expect x number of students to exceed expectations in reading on the Colorado Student Assessment Program (CSAP)) (70 to 80 percent received this pay)

The study suggests that the ProComp plan made teachers feel more supported and in turn, allowed them to more consistently meet their goals (Robles 2011).  In fact, between 2006 and 2010, 15 percent of the non-ProComp teachers even switched over to join the plan after seeing the positive results ProComp had on their schools and colleagues.  Not only has ProComp made the teaching profession more attractive, Goldhaber and Walch conclude that:

  • There were significant learning gains across grades and subjects;
  • The benefits of tracking data and evaluating educators spread from ProComp teachers to the entire district;
  • There was an expectation that the program would create a negative atmosphere among team members, but the opposite actually occurred and role models emerged;
  • ProComp teachers’ students had larger than expected gains on the state assessment.

Skeptics argue that these rewards focus more on classroom instruction than on student test achievement and that ProComp is inconsistent with the value-added approach. Goldhaber and Walch point out that “whether this is good or bad is clearly a normative question” but that “overall, ProComp has had a positive effect.” They also suggest that states might want to consider investing in similar programs, especially for their Race to the Top objectives.

Yesenia Robles of the Denver Post notes that ProComp has helped propel infrastructure reforms to change recruitment practices and enhance methods of data gathering. She goes on to point out that the difference between non-ProComp and ProComp teachers’ student growth objectives is comparable to the difference between a first- and second-year teacher’s. Her article, DPS Teacher-Pay System Likely Boosting Student Achievement, Study Finds, also points out that Denver Public Schools has retained 160 more teachers per year since 2006 and that 80 percent of all DPS teachers currently participate in the program. Robles notes that “the ProComp system is already in the process of changing with the implementation of the district’s evaluation-and-support system, known as LEAP, now being tested in 94 percent of DPS schools.” Right now it is still too early to tell whether ProComp can survive these alterations.

ProComp is an even-handed, well-designed pay-for-performance plan that other districts can use as a model and will hopefully emulate. The research shows not only that ProComp was well received by DPS teachers but, most significantly, that student achievement consistently improved. –M. Newport

(To see whether similar pay for performance plans have been successful, check out this ECS report.)





