[Updated with details and link to Deasy letter to the Times.]
It’s déjà vu all over again with the Los Angeles Times and its value-added scores that supposedly tell us how effective the teachers are in the nation’s second-largest school system.
The newspaper has printed its new ratings of elementary school teachers in the Los Angeles Unified School District based on how well students did on standardized tests. The idea is to use a formula the newspaper devised to assess the “value” a teacher added to a student’s achievement.
The newspaper says its project takes into account the complexities of measuring teacher performance. But it essentially ignores some obvious points:
*Teachers aren’t the only factor that goes into how well a student does on a test;
*The tests aren’t devised to evaluate teachers;
*There are lots of questions about how well the tests measure real student learning;
*Lots of experts say the whole value-added enterprise is not reliable and valid enough to be used in a high-stakes fashion; and
*See this post by a prominent mathematician about why value-added is suspect for the purposes of evaluating teachers.
I’d say that using a value-added score to label teachers effective or ineffective -- even in their ability to raise test scores -- is high-stakes.
This time the newspaper published value-added ratings for about 11,500 third- through fifth-grade teachers. That’s nearly twice the number of teachers rated in the Times’ first value-added outing last August, and this time, the scores were calculated and displayed in a different manner.
Why? “In the interest of greater clarity and accuracy,” the paper said.
Isn’t it good to know that this new information is more accurate than the first?
The paper’s story on Saturday, announcing that it would publish its new data on Sunday, noted that it was providing the “only publication of such teacher performance data in the nation,” as if it were doing something bold rather than injecting a newspaper’s editorial processes into a highly controversial enterprise.
It also noted that the district’s superintendent, John E. Deasy, and others had asked the paper not to publish its own value-added teacher ratings because the district had already calculated its own version for internal purposes, using a different model.
The public might get confused, the dissenters noted, because the results might be different.
“The data that the Times would release on individual teachers is not the same data that the teachers will be receiving from LAUSD,” said the letter, obtained by education writer Alexander Russo and posted on his blog This Week in Education. “This is very likely to create confusion for many educators and parents.” [See the whole letter here.]
Well, they got that right. In fact, the Times published a comparison of results from no fewer than four value-added models for each teacher in its database. The results? “On average, the results are very similar but, in specific cases, they can vary sharply,” the newspaper says.
By the way, Russo noted that the Times refused to give him the letter, saying it was private (unlike the personal teacher ratings it published), but he got it from another source.
The national obsession with using standardized test scores to grade public schools, students and, increasingly, teachers is getting worse. But, hey, why worry? It’s only the future of public education at stake.
[UPDATE: This post has been updated with a link to the letter the Los Angeles Unified School District superintendent sent to the Los Angeles Times asking the paper not to publish its value-added teacher ratings.]
Follow The Answer Sheet every day by bookmarking http://www.washingtonpost.com/blogs/answer-sheet. And for admissions advice, college news and links to campus papers, please check out our Higher Education page. Bookmark it!