Although the main purpose of the District's new teacher evaluation system is to rate teachers' effectiveness, officials are beginning to use the fresh troves of data it generates for other purposes, such as assessing administrators and determining which universities produce the best- or worst-prepared teachers.
"There are hundreds of human capital questions you need to answer to effectively run a school district," said Jason Kamras, personnel chief for D.C. public schools and the main architect behind the evaluation system, called IMPACT. "And for the first time, we have really good data allowing us to answer those questions. There is a bigger picture we are now able to understand."
Across the country, education reformers have been pressing for more rigorous, quantifiable ways to evaluate teachers, and the District's new system is in the vanguard of that movement, even as unions and education experts question its merits.
Now in its second year, IMPACT uses five classroom observations to rate how effective a teacher is in nine standards - including explaining content clearly and engaging students - deemed essential to good teaching. Certain teachers are also judged on whether their students' test scores sufficiently improve - a metric known as "value-added." All of the numbers are crunched into a teacher's annual rating, ranging from ineffective to highly effective.
Last year, former D.C. schools chancellor Michelle A. Rhee fired 75 teachers who received poor IMPACT evaluations and offered bonuses to more than 600 top scorers.
A lesser-known result of such new systems is that they are generating mountains of data that school officials are starting to use to guide key decisions, aside from which teachers to fire or reward. For instance, by matching teachers' ratings to the universities they attended, officials are deciding which pipelines deliver the best, or worst, talent.
"Now I know the average score of each teacher from each university. Over the coming years, we will be having conversations with these institutions, saying, 'Here's how your people are performing,' " said Kamras, who declined to say which colleges were doing well or poorly. "We'll just stop taking graduates from institutions that aren't producing effective teachers."
Just as teachers are being held accountable for students' performance on tests, Kamras said, administrators will be held accountable for teachers' performance on IMPACT evaluations. Teacher ratings from one cluster of schools might be compared with those from another cluster to assess how a particular instructional superintendent is faring. Principals will be judged in part by the number of "highly effective" teachers they are able to retain from year to year. Instructional coaches will be held accountable for the ratings of the teachers they coach.
"If they're not improving, then what is the coach doing?" Kamras said.
Kamras said data from classroom observations will help address a long-standing complaint from teachers: that they do not receive the professional support necessary to improve their craft.
School officials responsible for professional development can sort through how teachers rank schoolwide or systemwide in each of the nine teaching standards, discern areas of weakness and better target support.
Critics of value-added evaluation models, who have objected to using the data to fire teachers, say that expanding their use is unwise at this point.
"The core problem with these data is the creation of incentives to narrow the curriculum," said Richard Rothstein, a research associate with the Economic Policy Institute and one of the authors of a recent report critical of value-added evaluations.
"For example, if principals are judged by how many 'highly effective' teachers they retain, and if 'highly effective' is defined exclusively by [test scores], then teachers skilled at narrow test preparation could be retained, while teachers less skilled at that but more skilled at developing critical reasoning skills could be let go."
Nathan Saunders, president of the Washington Teachers' Union, said IMPACT is too flawed to be the basis for decisions. He is particularly concerned, he said, that historically black colleges and universities, which send many teachers to the District, could wind up penalized because of IMPACT.
"It's never been piloted, never been tested," Saunders said. "And the conclusions made using IMPACT as a basis will be just as flawed as the instrument they rely upon."
Kamras said it's too soon to draw many broad conclusions based on the new evaluation system. But as the IMPACT database grows, he said, "these are the sorts of things we can mine."