A new report says that the D.C. schools system, operating under reforms instituted by former chancellor Michelle Rhee, is holding onto its best teachers at nearly twice the rate of its lowest performers, though teacher turnover is still too common. But there are issues with the report that raise serious questions about its conclusions.
The report, according to this story by my colleague Emma Brown, was conducted by TNTP, formerly called The New Teacher Project, which Rhee founded before she ran the D.C. schools. (Incidentally, Rhee’s founding of TNTP was not mentioned in this Washington Post editorial, which praised the report and, by extension, Rhee’s reforms. Rhee’s successor, Kaya Henderson, has continued the Rhee reform program.)
The report is actually a spinoff of a June report released by TNTP called “The Irreplaceables,” a term used to describe teachers who are “so successful they are nearly impossible to replace.” It looked at teachers in four large urban school districts and reported that “irreplaceable” teachers often leave the profession voluntarily. It then provided suggestions on how to keep them in the classroom.
William Mathis, the managing director of the National Education Policy Center and a former Vermont superintendent, points out that “a very scant technical appendix in the original report” reveals important problems with methodology that affect both reports (since both look at the same four unnamed school districts, and the second report compares them to Washington, D.C.):
1) Three different value-added measures are used in the various districts, but the results are all lumped together.
2) Most of the districts use only one year of value-added data. Even people who support value-added evaluation methods acknowledge that one year of results is not enough to be valid.
3) We know nothing about the reliability of the teacher opinion scales administered to the teachers. Notably, only “20% to 30%” of the teachers had to respond to the survey before TNTP used it in the analysis. That is a very low response rate for a captive-audience questionnaire, and it raises significant sample-bias questions.
4) There are no real data on the strength of the claimed relationships of value-added measures with the survey questions. We just don’t know.
So, Mathis says, “Since both core measures are suspect, it logically tells you that the correlation has to be down in the dirt. Nothing else is mathematically probable.”
It should be noted that any teacher evaluation based on value-added data is suspect, given that assessment experts have shown the method to be highly unreliable. The method purports to use complex mathematical formulas to determine the “value” a teacher has added to a student’s performance on standardized tests. The D.C. teacher evaluation system, IMPACT, uses value-added data; it used to count for 50 percent of a teacher’s evaluation, but that has now been reduced to 35 percent, at least in the subjects where standardized tests are administered.
Meanwhile, researcher Matthew Di Carlo at the Shanker Blog wrote in this post that the report doesn’t really show how recent school reforms have affected teacher retention:
I can’t really talk too much about the effects of recent D.C. reforms on teacher retention. That’s because, despite a couple of TNTP’s conclusions, this analysis doesn’t tell us much of anything about the effects of recent DCPS policy changes.
Rubenstein, in a separate critique, wrote:
One thing that critics of D.C. reforms point out is that the teacher retention rate is just 79%, which is lower than what other similar districts experience. This new paper looks more deeply into that attrition rate and then suggests that there is finally something to cheer about in D.C. Though their achievement gap is as wide as ever, they have managed to find one statistic where they beat their neighboring districts: while the other districts retain about 88% of their ‘high performing’ teachers and about 85% of their ‘low performing’ teachers, D.C. retains 88% of its ‘high performing’ teachers but just 45% of its ‘low performing’ teachers. The conclusion is that this is something to celebrate….
This statistic gets even less relevant when we consider the potential bias, which the paper admits several times, in the rating system. According to the paper, only 11% of teachers in high poverty schools were ‘high performing’ compared to 42% of teachers in low poverty schools. On the flip side, only 3% of teachers in low poverty schools were ‘low performing’ compared with 36% of teachers in high poverty schools. On page 2, they speculate this could reveal a flaw in the IMPACT model on which this entire study is based…
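As an aside on the arithmetic: an overall retention rate like the 79% figure is just a weighted average of the group-specific rates, so the same headline number can conceal very different high/low splits. A minimal Python sketch illustrates this; the group shares used below are hypothetical, since the report’s actual proportions of ‘high’, ‘middle’ and ‘low’ performers are not quoted here.

```python
# Overall retention as a weighted average of group-specific retention rates.
# The shares below are illustrative assumptions, NOT figures from the report.

def overall_retention(rates, shares):
    """Return the weighted average of group retention rates.

    rates and shares are parallel sequences; shares must sum to 1.
    """
    if abs(sum(shares) - 1.0) > 1e-9:
        raise ValueError("shares must sum to 1")
    return sum(r * s for r, s in zip(rates, shares))

# Hypothetical split: 30% of teachers rated 'high' (88% retained),
# 50% 'middle' (assume 87% retained), 20% 'low' (45% retained).
rate = overall_retention([0.88, 0.87, 0.45], [0.30, 0.50, 0.20])
print(round(rate, 3))  # prints 0.789, i.e. close to the 79% headline figure
```

The point of the sketch is only that a single overall rate is compatible with many underlying distributions, which is why the per-group breakdown, and any bias in how groups are assigned, matters so much.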
Rubenstein notes the irony that TNTP and Teach For America both “train many of the teachers who work at these high poverty schools, so this statistic that there are so few high-performing teachers at these schools (just 11%) is in stark contrast with their PR about how good the new teachers are.”
All in all, this is a report that raises far more questions than it actually answers. Skepticism is in order.