President Obama talks to the media next to Secretary of Education Arne Duncan during a meeting with the Council of the Great City Schools Leadership to discuss “efforts to strengthen educational opportunities for students in city schools” at the White House in Washington March 16, 2015. (REUTERS/Yuri Gripas)

Sheri G. Lederman has taught fourth grade for more than 15 years in New York’s Great Neck Public School district, where her students routinely outperform state averages on math and English standardized tests. As I wrote in this post last year, she is a highly regarded educator, according to district Superintendent Thomas Dolan. Yet in 2013-14, Lederman’s overall evaluation dropped from “highly effective” to “effective” because on one part of the assessment, the part based on student standardized test scores, she received only one out of 20 points, rendering her “ineffective” in that category. She is suing state officials over the method of using test scores to evaluate teachers, in an action that could affect teacher evaluation systems in her state and possibly beyond. New York officials have argued that she has no standing to sue because she hasn’t lost her job. She, of course, argues otherwise. A judge will eventually decide.

That method is known as value-added modeling, or VAM. Using a complicated computer model, it purports to predict how students with similar characteristics are supposed to perform on the exams, and how much growth they are supposed to show over time, and then rates teachers on how their actual students’ results compare to those of the theoretical students. Supporters say the complicated VAM formulas can tease out every other factor that influences how well a student does on a test: hunger, living in a violent community, an earache, anything. Critics say it is statistically impossible to do that.
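To make the core idea concrete, here is a toy sketch of value-added scoring. It is not any state’s actual formula (real VAMs use far more elaborate statistical models and many student characteristics); it only illustrates the basic mechanism described above: fit a model predicting each student’s expected score, then credit each teacher with the average gap between their students’ actual and predicted scores. The teacher names and score data are invented for illustration.

```python
# Toy illustration of the value-added idea. NOT any real evaluation formula;
# actual VAMs are far more complex. Here the "prediction" is a simple
# least-squares line from each student's prior-year score.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def value_added(students):
    """students: list of (teacher, prior_score, current_score).
    Returns each teacher's average residual (actual minus predicted)."""
    xs = [s[1] for s in students]
    ys = [s[2] for s in students]
    a, b = fit_line(xs, ys)
    residuals = {}
    for teacher, prior, current in students:
        residuals.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(r) / len(r) for t, r in residuals.items()}

# Invented example data: two hypothetical teachers, three students each.
data = [
    ("Smith", 70, 78), ("Smith", 80, 85), ("Smith", 60, 70),
    ("Jones", 75, 74), ("Jones", 85, 82), ("Jones", 65, 68),
]
print(value_added(data))  # Smith's students beat the prediction; Jones's fall short
```

Even this toy version shows the critics’ point: the residual attributed to a teacher soaks up everything the model leaves out, whether or not the teacher caused it.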

Testing experts have for years been warning school reformers that efforts to evaluate teachers using VAM are neither reliable nor valid, and new research has recently come out backing up that view. The American Statistical Association, for example, said in a report slamming the use of VAM for teacher evaluation:

*VAMs are generally based on standardized test scores and do not directly measure potential teacher contributions toward other student outcomes.

*VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.

But reformers, including Education Secretary Arne Duncan, have embraced the method as a “data-driven” solution to teacher assessment. Last May, after the American Statistical Association’s report came out, I asked the Education Department whether the evidence had swayed Duncan’s views on VAM. Apparently not, as you can see here.

[Arne Duncan’s response to new report slamming teacher evaluation he favors]

Here is a letter just sent to Duncan by Rep. Steve Israel (D-N.Y.) about Lederman and VAM, with six important questions about teacher evaluation. If Duncan responds to Israel, I’ll share the answers.
