The truth is that teachers hardly live or die on the strength of their students’ test scores. They are measured on far more. Yet the effort to reform teacher evaluations is so shrouded in myth that progress has largely ground to a halt. Still, it’s clear that some change is needed. In California, for example, where the percentage of students reaching proficiency on national assessments is significantly below the national norm, an average of just 2.2 of the state’s more than 250,000 teachers are fired each year. But as long as the reform effort remains snared in the vise of hyperbole, sub-par learning will remain the story of our nation’s schools.
When student performance is incorporated as part of the evaluation formula, teachers, and the unions that represent them, push back hard. Newly elected National Education Association President Lily Eskelsen Garcia has called such value-added models “the mark of the Devil.” But effective evaluation programs that peg teachers’ job security to students’ achievement aren’t the menace many insist they are. Here are the facts on teacher evaluations that opponents consistently overlook:
1. Test scores are not the primary measurement of teacher performance.
Most U.S. states have created teacher evaluation programs that consist of three parts: student test scores, observations by classroom supervisors, and local measurements of success (such as attendance, school graduation rates, or teacher assessments of what and how much students have learned).
Given the current conversation, one might assume that test scores would be the predominant factor in evaluating teachers. But test scores rarely count for more than 50 percent of a teacher’s evaluation. In Maryland, for example, teachers’ evaluations are 20 percent student test scores, 50 percent classroom observations, and 30 percent local metrics of success. In New York, the breakdown is 25 percent for test scores, 60 percent for classroom observation, and 15 percent for local metrics. In West Virginia, it’s 15 percent, 80 percent, and 5 percent. And in Washington, D.C., whose system has been criticized as too heavily test-based, the breakdown is still only 35 percent, 50 percent, and 15 percent.
2. New teacher evaluation systems have not caused large numbers of teachers to be identified as ineffective.
A landmark 2009 report by The New Teacher Project found that less than 1 percent of teachers evaluated in 12 districts in four states were rated ineffective. Not much has changed. For example, new evaluation systems in Tennessee and Michigan identified only 2 percent of teachers as ineffective. In Florida, it was only 3 percent. And in Indiana, when the first round of results was calculated, only 2 percent of teachers were labeled ineffective.
3. Most teachers don’t even have student test scores on which they can be evaluated.
As part of the No Child Left Behind Act of 2001, schools across the country were required to administer math and reading exams in grades 3 to 8 and again at least once in high school. This means that the system cannot generate a score for a teacher who does not teach one of those grades or subjects. How many teachers fall outside these groups? In Florida, about two-thirds. The same is true in Tennessee. This number appears consistent across states. Add to this the fact that numerous states are suspending the use of test scores while their schools transition to the new Common Core standards and tests, and even fewer teachers are being evaluated through their students’ test scores.
When one takes these facts into account, it’s hard to label a system that still identifies only 2 to 3 percent of teachers as ineffective as a “mark of the Devil.” That said, today’s teacher evaluation programs can do a much better job of providing useful data on teacher performance for principals and other school leaders. Facts, not apocalyptic language, will make that happen.