I want to thank Monty Neill from FairTest for his response to my earlier post, which claimed that his organization misused the recent SAT results to argue that NCLB has been a failure and that our schools have been in decline. Dr. Neill makes some important points, but the bottom line is that SAT scores do not provide proof that our schools have been on the decline since NCLB. Still, such discussions about the proper use of data like SAT results are important. They help policymakers, educators, the media, and the general public better understand where our public schools really stand, so we can get our schools to where we all want them to be.
On that note, I do want to point out that I would have made the same argument if SAT scores had increased (as they did in the first four years following the passage of NCLB) and proponents had declared NCLB a success based on the results. As I stated in my previous post, whether SAT scores have increased or decreased over the past decade, it is difficult if not impossible to connect such changes to NCLB.
On the other hand, Dr. Neill makes some seemingly strong arguments to the contrary. Namely, he argues that my claim that scores could be declining due to an increasing number of disadvantaged students taking the SAT doesn't hold water, because scores for almost all groups of students are declining and achievement gaps are growing. Dr. Neill would have a valid argument if SAT results were representative of all high school students, but as I stated in my previous post, this is far from the case.
Since SAT test takers are not nationally representative of all high school students, the results cannot be used to make valid inferences about the quality of our schools.
- To fairly compare the college readiness of the Class of 2012 to that of the Class of 2006, the SAT would have to be given to a nationally representative sample of students entering high school four years earlier, so as to include those students who eventually drop out.
- However, the SAT is typically taken only by students who graduate high school and expect to go on to a four-year college.
- If more low-performing students go on to graduate high school expecting to attend a four-year college, then SAT scores would be expected to decline in the short term.
- Even though scores decreased for most student groups, it cannot be discounted that changes in the type of students taking the SAT have had a negative impact on overall scores as well as on the scores for individual student groups.
- For example, if there had been a large increase in the number of low-performing students taking the SAT from 2006 to 2012, and that increase was proportional across student groups, then overall scores would decline along with the scores for individual student groups.
- This would happen if policies were put in place to increase SAT participation among students who had traditionally not taken the test because they didn't see college as an option. Such policies were adopted during this period in some states, such as Maine, and in a number of districts.
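The composition effect described above can be illustrated with a small sketch. The numbers below are made up for illustration only (they are not actual SAT data): when lower-scoring test takers are added in proportion to each group's size, every group's average falls and so does the overall average, even though no existing student's performance changed.

```python
# Hypothetical illustration of a composition effect: adding new,
# lower-scoring test takers proportionally to every group lowers
# each group's average AND the overall average. All scores are
# invented for this sketch; they are not real SAT results.

def average(scores):
    return sum(scores) / len(scores)

# Two illustrative student groups in an earlier cohort
groups_2006 = {
    "Group A": [600, 620, 640],
    "Group B": [500, 520, 540],
}

# Later cohort: same students, plus one new lower-scoring
# test taker added to each group (a proportional increase)
groups_2012 = {
    "Group A": [600, 620, 640, 450],
    "Group B": [500, 520, 540, 400],
}

for name in groups_2006:
    before = average(groups_2006[name])
    after = average(groups_2012[name])
    print(f"{name}: {before:.1f} -> {after:.1f}")

overall_before = average([s for g in groups_2006.values() for s in g])
overall_after = average([s for g in groups_2012.values() for s in g])
print(f"Overall: {overall_before:.1f} -> {overall_after:.1f}")
```

Every group's average drops, and so does the overall average, even though every original score is unchanged. This is why falling averages for "almost all groups" do not, by themselves, rule out a change in who is taking the test.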
Dr. Neill also took exception to my criticism that FairTest only considered SAT scores. He rightly points out that FairTest has a number of reports and articles using other data to back up their claim. However, FairTest didn't provide this additional information in their press release. Yes, the press release was about the SAT results, but then they should have put those results in the proper context. If FairTest had stated in the press release that SAT results show a pattern similar to other indicators of decline in our education system since NCLB, that would have been more acceptable. But to say “Continuing SAT Decline Shows Failure of Test-Driven, K-12 Schools; Average Score Plunged 20 Points in Past Six Years; U.S. Education Headed in Wrong Direction” is not putting the SAT in context to support previous research. The headline clearly claims that SAT scores by themselves prove our schools have been in decline, and that is what they wanted the media to report, even though such claims cannot be made based on SAT scores alone.
This brings me to my last point. Dr. Neill points out that FairTest reached a similar conclusion about the decline of our schools using NAEP scores, which are a much more valid measure of our schools. But no matter the measure used, the question “What impact has NCLB had on student outcomes?” is nearly impossible to answer. It is just too difficult to isolate the impact of NCLB. For one, some states, like New York, California, Texas, and Florida, already had fairly strong test-based accountability systems in place prior to NCLB, while many other states had almost no accountability. Furthermore, states did not instantly implement NCLB upon its passage in 2002. Some states, like New York, were almost immediately in compliance, while it took a number of years for other states to fully implement the law. Even then, states had significant latitude in setting their standards and in how they assessed whether students met those standards. Finally, there have been so many other changes during this period that likely affected student achievement that it would be impossible to isolate NCLB from them. While the FairTest reports that use NAEP to evaluate the impact of NCLB provide some evidence that the law has not had the impact it was designed to have, those reports still do not show that NCLB has caused a decline in student achievement, as FairTest claims.
While SAT scores can be useful in determining how well our schools are preparing students for success in college, they have to be used in the proper context. Just because scores go up doesn't necessarily mean more students are college ready, or vice versa, since the majority of high school students never take the SAT. SAT scores are useful when you know how many students took the test and how those students differ from previous years. When it comes to evaluating our public schools, there is no single number that shows whether our schools are heading in the right direction. That is why we need to examine multiple measures and use each of them in the proper context, as we explain on our Data First site. – Jim Hull