The American Educational Research Association became the latest organization to caution against using value-added models — complex algorithms that attempt to measure a teacher’s impact on student test scores — to evaluate teachers and principals.
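At their core, most value-added models compare students' actual test scores with the scores predicted from their prior performance, then attribute the average gap to the teacher. The sketch below is a deliberately simplified illustration of that idea, not any state's actual formula; real models layer on demographic controls, multiple years of data and statistical adjustments:

```python
# Toy value-added sketch: regress current scores on prior scores,
# then average each teacher's student residuals (actual - predicted).
# This is an illustration only, not a production evaluation model.

def value_added(records):
    """records: list of (teacher, prior_score, current_score) tuples."""
    n = len(records)
    mean_p = sum(p for _, p, _ in records) / n
    mean_c = sum(c for _, _, c in records) / n
    # Ordinary least-squares fit of current ~ prior.
    cov = sum((p - mean_p) * (c - mean_c) for _, p, c in records)
    var = sum((p - mean_p) ** 2 for _, p, _ in records)
    slope = cov / var
    intercept = mean_c - slope * mean_p
    # A teacher's "value added" is the mean gap between actual and
    # predicted scores across that teacher's students.
    residuals = {}
    for teacher, p, c in records:
        residuals.setdefault(teacher, []).append(c - (intercept + slope * p))
    return {t: sum(r) / len(r) for t, r in residuals.items()}

data = [
    ("A", 60, 70), ("A", 70, 78),
    ("B", 60, 62), ("B", 70, 70),
]
print(value_added(data))  # Teacher A's students beat predictions; B's fell short.
```

Even this toy version hints at the critics' concerns: with only a handful of students, a teacher's score swings on statistical noise the model cannot separate from genuine effectiveness.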
A growing number of states have begun using value-added models, or VAMs, to judge and in some cases fire teachers. But there are still a lot of unanswered questions about how to ensure that such models provide valid, reliable and accurate information about teachers, AERA said in a statement released Wednesday.
AERA cautioned against using these formulas for “high-stakes decisions” about educators.
“Many states and districts have incorporated VAM in a comprehensive system to evaluate teachers, principals, and educator preparation programs,” the statement says. “There are considerable risks of misclassification and misinterpretation in the use of VAM to inform these evaluations.”
AERA listed eight technical requirements that should be met before the models can be considered valid and said that, generally speaking, it isn't possible for teacher and principal evaluation systems to meet those requirements.
The National Research Council, the American Statistical Association and the RAND Corporation have previously cautioned against using value-added scores to make personnel decisions.
VAMs have also drawn the ire of teachers unions, whose leaders call the algorithms arbitrary and unfair. In some states, for example, teachers are evaluated according to the test scores of students they don’t teach. A number of lawsuits challenging VAM-based teacher evaluations are pending in courts nationwide.