The 2013 math and reading scores for the National Assessment of Educational Progress were recently released, and there has been a lot of loud reaction — both triumphant and defeatist — about the results. If you listen to D.C. public schools officials, a bump up in scores proves how brilliant their school reforms have been. Meanwhile, Education Secretary Arne Duncan said in a statement that national results “are reason for concern as much as optimism.”
No, and not really.
NAEP is often called the nation’s report card because it is the only measure of student achievement given periodically to a sampling of students around the nation. It is seen by many as a high-quality test that tells us more about a student’s ability than state standardized tests — though it has many critics, too, some of whom say that the NAEP definition of “proficiency” is unnaturally high, and that the test cannot measure many of the qualities students must develop to be successful. (One study by a former acting director of the National Center for Education Statistics showed that most countries that participate in the international tests called TIMSS would not do well under NAEP’s definition of proficiency.)
D.C. test scores in fourth and eighth grades for math and reading jumped in 2013 over 2011, as my colleague Emma Brown explained in this story:
Eighth graders climbed six points in reading, to 248 on a 500-point scale; the national average was 266. That’s a big jump, but it remains true that only 17 percent of D.C. eighth-graders scored well enough to be considered proficient in reading, which the U.S. Education Department defines as “solid academic performance” on skills appropriate for that grade level. Nationally, 36 percent of eighth graders were deemed proficient readers on NAEP. In reading, fourth-grade proficiency increased from 19 percent in 2011 to 23 percent in 2013.
In math, eighth-graders posted a five-point gain, with the proportion of proficient students climbing from 17 percent to 19 percent. Fourth-grade math proficiency jumped from 22 percent to 28 percent.
D.C. Mayor Vincent Gray, Schools Chancellor Kaya Henderson and even Duncan were thrilled with the D.C. scores, saying that the results prove that the corporate-influenced school reform agenda started in the District in 2007 by Michelle Rhee is working.
Actually, they have no basis to say that. For one thing, the test scores released by NAEP combine traditional public schools and public charter schools in the District, and the disaggregated information won’t be released until next month. (Some writers are insisting that private schools are included in the D.C. scores, which isn’t so, though NAEP does test some private school students as well.)
More important, though, is that D.C. public schools have been seeing a steady rise in NAEP scores since well before Rhee took over and started firing teachers, linking teacher performance to standardized test scores and taking other reform steps to make the system operate more like a business than a civic institution.
Guy Brandenburg, a retired math teacher, wrote on his blog:
First of all, the increases in some of the scores in DC (my home town) are a continuation of a trend that has been going on since about 2000. As a result of those increases, DC’s fourth grade math students, while still dead last in the nation, have nearly caught up with MISSISSIPPI, the lowest-scoring state in the US.
You will have to strain your imagination to see any huge differences between the trends pre-Rhee and post-Rhee. (She was installed after testing was over in 2007.)
And here’s a chart from Brandenburg’s blog:
In general, FairTest views NAEP scores as one reasonably sound tool to assess overall U.S. academic performance. NAEP exams are not high-stakes, so they are less susceptible to many forms of manipulation, such as teaching to the test. Their item quality is generally better than that of most states’ tests. And they offer a relatively consistent way to assess changes in knowledge mastery over time.
NAEP is not, however, the pure “gold standard test” that some proponents claim. Many analysts suggest that some states and cities may “stack the deck” by choosing which students take the test (it’s given to a sample that is supposed to be selected by statistically rigorous techniques). NAEP’s so-called achievement levels have been labeled “bogus” by pretty much every independent body that has looked at them. And many state comparisons based on NAEP scores fail to take account of the test’s margin of error.
Overall, FairTest believes that, taken together with other data sources, NAEP scores can provide a useful “big picture” overview of education in the U.S. As always, we support the use of multiple measures to make such judgments. That’s why we also mentioned ACT and SAT score trends as additional confirming data.
It should also be noted that despite the rise reported in the D.C. scores, this remains true: While there was progress among black, white, Hispanic and low-income students, D.C. achievement gaps remain large and, in some cases, widened in the 2013 NAEP results. And, as Brown noted in her story:
Different children take the test each year, making it difficult to draw conclusions about what causes scores to rise or fall, especially amid demographic change.
One final problem with the way the D.C. scores were reported: The District of Columbia is not a state, but it was compared to states in the NAEP results. The Washington Post’s headline on Brown’s story (she doesn’t write the headlines) said: “D.C. posts significant gains on national test, outpacing nearly every state.” The District should never be compared to a state; it’s a false comparison.
So yes, the D.C. test scores went up. But no, we don’t know how much of that rise belongs to traditional public schools or charter schools, and there is no way to definitively credit reforms with the results.