You can be certain that members of the American Statistical Association, the largest organization in the United States representing statisticians and related professionals, know a thing or two about data and measurement. That makes the statement that the association just issued very important for school reform.
The ASA just slammed the high-stakes “value-added method” (VAM) of evaluating teachers that states have increasingly embraced as part of school-reform efforts. VAM purports to measure the “value” a teacher adds to student learning by running standardized test scores through complicated formulas that supposedly factor out all other influences and emerge with a valid assessment of how effective a particular teacher has been.
These formulas can’t actually do this with sufficient reliability and validity, but school reformers have pushed this approach and now most states use VAM as part of teacher evaluations. Because standardized test scores are generally available only in math and English, reformers have devised bizarre implementation methods in which teachers are assessed on the test scores of students they don’t have or subjects they don’t teach. When Michelle Rhee was chancellor of D.C. public schools (2007-10), she was so enamored with using student test scores to evaluate adults that she implemented a system in which all adults in a school building, including the custodians, were evaluated in part on test scores.
Assessment experts have been saying for years that this is an unfair way to evaluate anybody, especially for high-stakes purposes such as pay, employment status, tenure or even the very survival of a school. But reformers went ahead anyway on the advice of some economists who have embraced the method (though many other economists have panned it). Now the statisticians have come out with recommendations for the use of VAM for teachers, principals and schools that school reformers should — but most likely won’t — take to heart.
Here’s part of what they said:
*VAMs are generally based on standardized test scores and do not directly measure potential teacher contributions toward other student outcomes.
*VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
The entire statement is below.
Some economists have gone so far as to say that higher VAM scores for teachers lead to more economic success for their students later in life. Work published by the National Bureau of Economic Research, by Raj Chetty, John N. Friedman and Jonah E. Rockoff, has made that claim, though there are some big problems with their research, according to an analysis of their latest study published by the National Education Policy Center at the University of Colorado Boulder. The analysis identifies a number of key problems with the report linking teachers’ VAM scores to students’ later financial success, including the fact that the authors’ own results show the VAM calculation for teachers to be unreliable.
You can read the analysis below, after the American Statistical Association’s statement.
The evidence against VAM is at this point overwhelming. The refusal of school reformers to acknowledge it is outrageous.
And here’s the National Education Policy Center paper: