One of the most controversial issues in public education today is the use of “value-added measures” to evaluate teachers and principals. What these measures, known as VAM, purportedly do is calculate the “value” of a teacher in student achievement through complicated formulas that use student standardized test scores as a base. Assessment experts have repeatedly warned that VAM should not be used for any high-stakes decisions because the results are unreliable, but that hasn’t stopped school reformers from using VAM anyway in systems across the country, with support from the Obama administration.
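Actual district formulas vary and are often proprietary, but the basic statistical idea behind a value-added estimate can be sketched in a few lines. The following is a simplified illustration with made-up data, not any district’s real model: regress students’ current test scores on their prior-year scores, then treat a teacher’s “value added” as the average residual among that teacher’s students.

```python
# Illustrative sketch of a simple value-added estimate, using entirely
# hypothetical data -- NOT any district's actual formula.
import numpy as np

rng = np.random.default_rng(0)

n = 300
teacher = rng.integers(0, 3, n)           # each student assigned to 1 of 3 hypothetical teachers
prior = rng.normal(500, 50, n)            # prior-year standardized test scores
true_effect = np.array([-5.0, 0.0, 5.0])  # assumed "true" teacher effects (unknown in practice)
current = 0.8 * prior + true_effect[teacher] + rng.normal(0, 20, n)

# Ordinary least squares: predict current scores from prior scores
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ beta             # how far each student beats (or misses) the prediction

# A teacher's "value added" is the mean residual across his or her students
vam = np.array([residual[teacher == t].mean() for t in range(3)])
print(np.round(vam, 1))
```

Even in this toy version, the estimates are noisy averages over a single classroom’s worth of students, which is one reason assessment experts warn against using them for high-stakes personnel decisions.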
This makes you wonder why the Education Department would release its new report titled “Do Disadvantaged Students Get Less Effective Teaching?” which is a synthesis of three earlier studies that used “value-added measures” to define effective teaching. As teacher and blogger Larry Ferlazzo notes in this post, the report is based on “discredited science.” In fact, the report itself notes some VAM limitations:
“Value added” is a teacher’s contribution to students’ learning gains. Because individual researchers have varied in their presentation of this evidence, it is challenging for practitioners to draw lessons from the data….
Value-added indicators, increasingly promoted by policy (for example, U.S. Department of Education 2012; Tennessee Department of Education 2013; Hillsborough County Public Schools 2011), do have limitations. Because they rely exclusively on student test scores as an outcome measure, they are not meant to capture all aspects of a teacher’s performance, and they can only be estimated for teachers whose students take standardized tests. They tell us about teachers’ average impact on their students’ test scores after accounting for students’ background and prior achievement. But value-added indicators assume that a teacher has the same impact on all of his or her students.
There may be differences in how teachers devote their time to different students within the classroom that are not captured by the studies we describe here. Also, there may be unmeasured influences, such as the sorting of students across classrooms, that value-added indicators fail to account for (Rothstein 2009). Despite their limitations, however, value-added indicators have been shown to predict teachers’ future performance (Kane and Staiger 2008; Kane et al. 2013) and long-term student outcomes (Chetty et al. 2011).
Actually, the last sentence is highly debatable. But there’s more to the report to question. In fact, Ferlazzo calls the researchers’ conclusions “astounding.”
From his blog post:
Let me get this straight.
“School reformers,” including Arne Duncan, are alienating millions of teachers and hurting countless students and their families over a teacher evaluation policy that — using their own prize methodology (ignorant though we may believe it to be) — affects 2 to 4 percent of the achievement gap?
Of course, and unfortunately, Duncan’s ignoring his own department’s research is no surprise, considering he’s doing the same by pushing merit pay even though his department announced last September that of three approved studies of a New York performance pay program, one showed across-the-board negative effects on student achievement; another showed negative effects in some areas and no effect in others; and a third showed no effect at all.
And his same department has previously concluded that 90 percent of the elements that affect student test scores are outside the control of teachers.
If school reformers really believe that standardized test scores are such a great way to evaluate teachers, you’d think they would find the conclusions of this report sobering. Don’t hold your breath.