That method is known as value-added modeling, or VAM. It purports to use a complicated computer model to predict how students with similar characteristics are supposed to perform on the exams, and how much growth they are supposed to show over time, and then rates teachers on how their students' actual results compare with those of the theoretical students. Supporters say the complicated VAM formulas can tease out all the other factors that influence how well a student does on a test: hunger, living in a violent community, an earache, anything. Critics say it is statistically impossible to do that.
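To see the core idea in miniature, here is a deliberately toy sketch in Python, run on made-up data: fit a prediction of this year's score from last year's score, then treat the average gap between a teacher's students and their predicted scores as that teacher's "value added." This is only an illustration of the logic described above; the variable names, the single prior-score predictor, and the synthetic numbers are my own assumptions, not any actual district's VAM formula, which typically involves far more elaborate statistical models.

```python
# Illustrative sketch only: a drastically simplified "value-added" calculation
# on synthetic data. Real VAM systems use much more complex models; everything
# here (one covariate, made-up scores) is an assumption for demonstration.
import numpy as np

rng = np.random.default_rng(0)

n_students = 300
n_teachers = 10

# Synthetic data: prior-year score, a random teacher assignment, and a
# current-year score driven mostly by the prior score plus unmeasured factors.
prior_score = rng.normal(70, 10, n_students)
teacher_id = rng.integers(0, n_teachers, n_students)
true_teacher_effect = rng.normal(0, 2, n_teachers)
current_score = (5 + 0.9 * prior_score
                 + true_teacher_effect[teacher_id]
                 + rng.normal(0, 8, n_students))   # hunger, illness, luck...

# Step 1: predict each student's "expected" score from prior achievement
# (ordinary least squares with an intercept).
X = np.column_stack([np.ones(n_students), prior_score])
coef, *_ = np.linalg.lstsq(X, current_score, rcond=None)
expected_score = X @ coef

# Step 2: the residual is how far each student landed above or below the
# prediction; a teacher's "value added" is the average residual of that
# teacher's students.
residual = current_score - expected_score
for t in range(n_teachers):
    vam = residual[teacher_id == t].mean()
    print(f"Teacher {t}: estimated value added = {vam:+.2f}")
```

Everything the model does not measure ends up in those residuals, which is exactly the critics' point: whatever is left over gets attributed to the teacher, whether the teacher caused it or not.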
Testing experts have for years been warning school reformers that efforts to evaluate teachers using VAM are neither reliable nor valid, and new research backing up that view has come out recently. The American Statistical Association, for example, said in a report slamming the use of VAM for teacher evaluation:
*VAMs are generally based on standardized test scores and do not directly measure potential teacher contributions toward other student outcomes.*

*VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.*
But reformers, including Education Secretary Arne Duncan, have embraced the method as a “data-driven” solution to teacher evaluation. Last May, after the American Statistical Association’s report came out, I asked the Education Department whether the evidence had swayed Duncan’s views on VAM. Apparently not, as you can see here.
Here is a letter just sent to Duncan by Rep. Steve Israel (D-N.Y.) about Lederman and VAM, with six important questions about teacher evaluation. If Duncan responds to Israel, I’ll share the answers.