The Cleveland Plain Dealer just published “value-added” ratings of teachers that purport to show how effective each teacher is or isn’t. The Los Angeles Times did something similar in 2010 and 2011, and so did The New York Times in 2012. In each case, the paper came under heavy criticism.

Why?

Value-added ratings are derived by plugging students’ standardized test scores into a complicated formula that supposedly determines how much “value” a teacher adds to a student’s achievement, after accounting for a varying number of other factors. How complicated? Here’s an example, taken from the website of the D.C. Public Schools:

First, we calculate how a teacher’s students are likely to perform, on average, on our standardized assessment (the DC CAS) given their previous year’s scores and other relevant information. We then compare that likely score with the students’ actual average score. Teachers with high IVA scores are those whose students’ actual performance exceeds their likely performance.
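
In more generic terms, that description boils down to two steps: predict each student’s “likely” score from prior scores and other variables, then average the gaps between actual and likely scores for each teacher’s students. The toy sketch below only illustrates that idea; the made-up data, the single prior-year predictor and the simple linear fit are hypothetical stand-ins for the far more elaborate models districts actually use, not the DC CAS formula itself.

```python
# Toy illustration of a generic value-added calculation -- NOT the actual
# model used by DCPS, Ohio, or any district. All data here is made up.
import numpy as np

# (teacher, prior-year score, current-year score) for each student
students = [
    ("Teacher A", 410, 445),
    ("Teacher A", 480, 500),
    ("Teacher B", 430, 425),
    ("Teacher B", 520, 540),
    ("Teacher B", 390, 400),
]

prior = np.array([s[1] for s in students], dtype=float)
actual = np.array([s[2] for s in students], dtype=float)

# Step 1: estimate each student's "likely" score from the prior-year score.
# (A one-variable linear fit stands in for the "other relevant information"
# a real model would include.)
slope, intercept = np.polyfit(prior, actual, 1)
likely = slope * prior + intercept

# Step 2: a teacher's value-added is the average gap between the actual
# and likely scores of that teacher's students.
gaps_by_teacher = {}
for (teacher, _, _), gap in zip(students, actual - likely):
    gaps_by_teacher.setdefault(teacher, []).append(gap)

for teacher, gaps in sorted(gaps_by_teacher.items()):
    print(f"{teacher}: value-added = {np.mean(gaps):+.1f} points")
```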


These ratings are increasingly being used by states, with support from the Obama administration, to assess teachers, and the scores sometimes count for a large share of a teacher’s evaluation. That may sound sensible, but testing experts say it is a bad idea to use these ratings for any high-stakes purpose. Why? According to a recent paper, “Getting Teacher Evaluation Right: A Background Paper for Policy Makers,” by Stanford University’s Linda Darling-Hammond and three other researchers:


1) Value-added models of teacher effectiveness are highly unstable. Teachers’ ratings differ substantially from class to class and from year to year, as well as from one test to another.


2) Teachers’ value-added ratings are significantly affected by differences in the students who are assigned to them, even when models try to control for prior achievement and student demographic variables. In particular, teachers with large numbers of new English learners and others with special needs have been found to show lower gains than the same teachers who are teaching other students.


3) Value-added ratings cannot disentangle the many influences on student progress. These include home, school and student factors that influence student learning gains and that matter more than the individual teacher in explaining changes in test scores.


Furthermore, many teachers are being rated through value-added measures that are based on the test results of students they haven’t even had. That may sound ridiculous but it’s true. In fact, Florida just passed a new law saying that teachers can no longer be evaluated on the basis of scores from students they never taught. The law was needed because that was happening to teachers in the Sunshine State. But Florida isn’t the only place it is happening.

Still, the Plain Dealer, working with StateImpact Ohio, a partnership of Ohio public radio stations, published the ratings with teachers’ names. The ratings ran alongside a series of stories on Ohio’s controversial value-added teacher evaluation system.

Knowing that value-added methods are controversial, the Plain Dealer’s editors published an explanation of their decision to release the ratings by name, which you can read here. The piece says in part:

Plain Dealer Assistant Managing Editor Chris Quinn said there are several reasons to make the ratings available.


“One is that state lawmakers created the value-added system to come up with a better way to assess teachers, to give the residents of the state better accountability,” he said. “Another is that tax dollars are used to compile the ratings, meaning the people of Ohio have paid for this.


“Finally, it seems like common sense,” Quinn continued. “Any parent sending a child off to school wants to know everything possible about what is ahead for that child. If public information exists about the quality of a teacher, who are we to deny that information to the parent?”


There are so many problems with value-added measures that the results are far too questionable to be used for high-stakes decisions, or to be published in a newspaper as if they actually carried great meaning. The very act of publishing the ratings, without any information about how individual scores can be skewed, imbues them with more validity than they deserve.

In fact, the editorial writers at the Plain Dealer wrote a piece about the ratings that said in part:

Reporting by The Plain Dealer and StateImpact Ohio strongly suggests that value-added scoring favors affluent districts where students already perform well, undercutting the original premise that it could track student achievement no matter the income level.


Nor can value-added measure a dedicated teacher who turns a troublemaker into a scholar or a mousy kid into a confident debater. And value-added scores may not paint a clear picture of the work of teachers in urban school districts where children often have unique learning challenges, although the reasons for this apparent deficiency in the rating system remain unclear.


If that’s so, then why were the ratings published?

The Los Angeles Times went even further when it published value-added ratings of teachers: it created its own formula for rating teachers rather than simply publishing district ratings. In 2011, Los Angeles Unified School District Superintendent John Deasy asked the paper not to publish its ratings because the district had calculated its own version for internal purposes, using a different model, and he worried the public would be confused by the differing results. The New York Times and the Cleveland Plain Dealer, by contrast, published district value-added results.

Researchers and educators around the country have increasingly been fighting the use of value-added models for teacher and principal evaluation. They know that there are other effective ways to evaluate teachers without using student test scores in a high-stakes fashion. A number of high-achieving school districts, such as Montgomery County Public Schools in Maryland and Fairfax County Public Schools in Virginia, already evaluate teachers that way.

Why policymakers insist on using data that isn’t reliable or valid — and why newspapers keep publishing this data — are among the big mysteries of the modern school-reform era.