I don’t spend much time debunking our most powerful educational fad: value-added assessments to rate teachers. My colleague Valerie Strauss eviscerates value-added several times a week on her Answer Sheet blog with the verve of a samurai, so who needs me?
Unfortunately, value-added is still growing in every corner of our nation, including D.C. schools, despite all that torn flesh and missing pieces. It’s like those monsters lumbering through this year’s action films. We’ve got to stop them! Let me fling my small, aged body in their way with the best argument against value-added I have seen in some time.
It comes from education analyst and teacher trainer Grant Wiggins and his “Granted, but . . .” blog. He starts with the reasons many people, including him and me, like the idea of value-added. Why not rate teachers by how much their students improve over time? In theory, this allows us to judge teachers in low- and high-income schools fairly, instead of declaring, as we tend to do, that the teachers in rich neighborhoods are better than those in poor neighborhoods because their students’ test scores are higher.
“I have seen this sham firsthand over many years,” Wiggins writes. “Lots of so-called good N.J. and N.Y. suburban districts are truly awful when you look firsthand (as I have for three decades) at the pedagogy, assignments and local assessments; but those kids outscore the kids from Trenton and New York City, even though both city systems have a number of outstanding schools and teachers.”
Money makes a big difference. “For every $10,000 increase in family income, SAT scores rise approximately 15 points,” he wrote.
Also, Wiggins wrote, valid research on value-added exposes “hidden truths,” such as “it IS true that models accurately predict over a three-year period, performance at the extremes. Thus, the really effective teachers stay so and the really ineffective ones are really ineffective.”
Schools with high test scores discover through value-added analysis that they need more than high scores. One outstanding prep school, Wiggins said, gave a professionally designed test of critical thinking to freshmen and seniors. There was no improvement. Colleges that gave the Collegiate Learning Assessment of analytical skills to freshmen and seniors have seen similar results.
Our mistake was thinking this valuable long-term research tool would work as a one-year teacher rating system. “It becomes like a sick game of telephone: What starts out as a reasonable idea, when whispered down the line to people who don’t really get the details — or don’t want to get them — becomes an abomination,” Wiggins wrote. “By looking at individual teachers, over only one year (instead of the minimum three years as the psychometricians and VAM [value-added model] designers stress), we now demand more from the tests than can be obtained with sufficient precision.”
New value-added assessments in the District, New York and elsewhere carry a whiff of Stalinist economic planning: secretive measures immune to review or logic. Wiggins said we have re-invented “the Russian wheat quotas of the 1950s. It didn’t work then and it won’t work now.”
What should we do instead? He suggests we look at how we create success in sports. Many great classroom teachers have told me the same thing. Follow the methods of great coaches, Wiggins said: “Utterly transparent and valid measures, timely and frequent results, the ability to challenge judgments made, many diverse measurements over time, teacher-coach ownership of the rules and systems, and tiered leagues in which we have reasonable expectation and good incentive to make genuine improvement over time.”
I would add one more athletic device: working as a team. That is the way the best schools I know operate. They focus on how well the whole school improves rather than trying to rate precisely how much each teacher’s students improve each year.
To read previous columns, go to washingtonpost.com/jaymathews.