Correction: An earlier version of this story misidentified an assistant principal at Seaton Elementary. Her name is Cynthia Robinson-Rivers. This version has been corrected.

D.C. Public Schools officials have changed how they evaluate principals in response to complaints that the previous system — which rated more than half of the city’s principals below “effective” — was unfair and too closely tied to student test scores.

Under the old system, an otherwise strong principal would have been rated below effective if test scores at his or her school stagnated or declined in either math or reading. Now, test scores and other student achievement measures will account for 50 percent of a principal’s annual rating.

Officials also have scrapped, at least for this year, a plan to freeze the pay of principals rated below effective. All principals reappointed to their jobs next year will be eligible for annual raises, known as step increases.

“This was an issue of particular concern to our principals, so we decided to press pause,” said Jason Kamras, the school system’s chief of human capital. Kamras said the school system still believes in merit pay and will continue to award bonuses to principals rated “highly effective.”

The changes have drawn a range of reactions from principals, with some calling the changes mere tweaks and others praising the school system for seeking and listening to feedback.

“I am pleased with the outcome. We have a lot to be proud of,” said Atasha M. James, principal of Leckie Elementary in Southwest, who served on a task force that offered suggestions for changes. “The revisions address all of the issues raised by principals, and our input was taken seriously.”

The evaluations, based on a combination of supervisor observation and student achievement data, are the principals’ version of the IMPACT evaluation system that has been used to judge teachers in the city since 2009.

The 2012-13 school year was the first time that the evaluations were used to sort principals and assistant principals into performance categories: ineffective, developing, effective and highly effective.

Many principals said they were surprised and frustrated when they received their ratings by e-mail in September. System officials had failed to explain how the principals were going to be evaluated, they said, and the ratings were a shock. Half of the system’s principals were “developing” and 8 percent were “ineffective” in 2012-13. Far fewer of the system’s teachers — 23 percent — were rated below effective that year.

The ratings do not affect a principal’s job security, because all principals work on one-year contracts and can be dismissed for any reason at the end of a school year. But they do determine whether principals are seen as exemplary and offered additional leadership and mentorship opportunities, or whether they are seen as in need of improvement and greater scrutiny.

Kamras initially defended the principal evaluations. But he changed his tone as some principals and assistant principals erupted, with some threatening to leave the system and others speaking out against the evaluations in meetings with administrators.

“We made some mistakes,” Kamras wrote in an e-mail to The Washington Post in October. “We should have given our principals and [assistant principals] more information about the process earlier in the year. And we need to give even more thought to how to balance test scores and one’s leadership skill when evaluating principals.”

Kamras convened a task force of principals, assistant principals and instructional superintendents to suggest changes to the evaluation system. He said he and Schools Chancellor Kaya Henderson considered that feedback as they made revisions.

The evaluations combine a school’s progress toward five student achievement goals — including proficiency rates on standardized tests in math and reading — with supervisor observations of a principal’s performance in six areas of leadership, including retaining talented teachers, engaging families and setting a vision for the school’s instruction and culture.

Last year, those factors were combined according to a complicated blueprint that made it difficult to understand how much each component counted toward final ratings. Now the factors are combined according to a simple formula: midyear observations count for 20 percent, end-of-year observations for 30 percent, and the five student achievement goals for 10 percent each.

Supervisors will have discretion to raise a principal’s rating based on a school’s individual circumstances or challenges.

One principal, who spoke on the condition of anonymity because of fear of reprisal, said the changes do not fundamentally alter the underlying emphasis on test scores. The revisions reflect “absolutely no new thinking,” the principal said.

But others said that the school system was sincere about listening to concerns from the task force and that the new system is far more clear.

“I appreciate being able to calculate and understand my score more easily based on these changes,” said Cynthia Robinson-Rivers, assistant principal at Seaton Elementary in Northwest. “DCPS leadership was very responsive to the concerns school leaders voiced about their evaluations.”

Kamras said that the school system is exploring more revisions for the 2014-15 school year, including the possibility of incorporating feedback from teachers into principal evaluations. The system also is considering multi-year contracts for principals, he said, a move that many D.C. educators and activists have advocated as a way to stem high turnover and give school leaders the time they need to make and sustain needed changes.