Ratings madness: NO ‘highly effective’ elementary/middle-school teachers in Syracuse?

The teacher ratings madness continues. In this piece, Aaron Pallas, professor of sociology and education at Teachers College, Columbia University, asks and answers this question: Are there really no highly effective elementary or middle-school teachers in Syracuse? Pallas writes the Sociological Eye on Education blog — where this post appeared — for The Hechinger Report, a nonprofit, non-partisan education-news outlet affiliated with the Hechinger Institute on Education and the Media.


By Aaron Pallas

What the heck happened in Syracuse?

About 10 days ago, Superintendent Sharon Contreras briefed the city’s Board of Education on the results of the first year of implementation of the “Annual Professional Performance Review” (APPR) plan, a fancy phrase for the new statewide teacher and principal evaluation system in New York. The APPR process sorts teachers into the categories of “highly effective,” “effective,” “developing” and “ineffective” based on state-approved measures of student learning “growth,” locally determined measures of student achievement, and principal and peer observations of teachers’ classroom practices.

The summary evaluations reported by Superintendent Contreras were striking: Just two percent of Syracuse teachers were rated highly effective, and an additional 58 percent were deemed effective. Seven percent were classified as ineffective, and 33 percent as developing, categories that suggest low levels of teaching performance, the need for teacher improvement plans, and the threat of eventual dismissal. Not a single elementary or middle-school teacher in the entire district was rated highly effective.

When the ratings were disaggregated into their three components, a distinct pattern emerged. Syracuse teachers were, on average, squarely in the effective range on the state growth scores, garnering an average of 11 out of the 20 points available in this category. And they were rated effective or highly effective by the principals and peers who observed them, averaging a score of 58 out of 60 on the professional practice observational measures. So far, so good. But on the school-wide measures of student achievement used to calculate the local measures, Syracuse teachers obtained an average score of 6 out of 20, and even a bit lower than that for teachers in elementary and middle schools.

How could this be? Is it plausible that teachers’ performance could be so discrepant across the three categories that make up the summary evaluation? To be sure, there are longstanding concerns that teachers have been rated exemplary while the performance of their students has languished. But in Syracuse, teacher performance on the statewide “objective” growth measures was in the effective range, whereas it was much, much lower on the local measures of achievement.

The root of the problem lies in the APPR plan submitted by the Syracuse school district and approved by the New York State Education Department (NYSED). Syracuse proposed a school-wide local measure based on changes, from 2012 to 2013, in the percentage of a school’s students scoring in each of the four performance categories. Teachers would be rewarded if the percentage of students scoring in level 1, the lowest performance category, decreased from 2012 to 2013, with a decrease of 10 percentage points in ELA and math set as the target for rating all of the school’s teachers highly effective. Similarly, teachers would be rated highly effective if the percentage of students scoring in levels 3 and 4 in ELA and math increased by more than five percentage points. If the percentages of interest didn’t budge, all of the teachers in the school would be rated as ineffective.
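The logic of that school-wide measure can be sketched in a few lines. This is only an illustration pieced together from the cases the article names; the district's actual plan presumably specified how intermediate changes map to "effective" and "developing," which the article does not, so those cases are left explicitly open here.

```python
def rate_school(change_level1: float, change_levels34: float) -> str:
    """Hypothetical sketch of the Syracuse school-wide local measure.

    change_level1   -- percentage-point change, 2012 to 2013, in students
                       scoring at level 1 (negative = fewer low scorers)
    change_levels34 -- percentage-point change in students scoring at
                       levels 3 and 4 (positive = more proficient scorers)
    """
    # Highly effective: level-1 share drops by 10+ points, or the
    # levels-3-and-4 share rises by more than 5 points.
    if change_level1 <= -10 or change_levels34 > 5:
        return "highly effective"
    # Ineffective: the percentages of interest don't budge at all.
    if change_level1 == 0 and change_levels34 == 0:
        return "ineffective"
    # The article doesn't say how other outcomes were scored.
    return "not specified in the article"

# With the harder 2013 Common Core tests, level-1 shares rose and
# proficiency shares fell, so neither "highly effective" trigger
# could plausibly fire.
print(rate_school(change_level1=22, change_levels34=-24))
```

The point the example makes concrete: under these triggers, a year in which the test itself gets harder forecloses the top rating for every teacher in the school, regardless of what happened in any classroom.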

What went unnoticed in Syracuse—and was not remarked upon by NYSED in its review and approval of the Syracuse plan—was that the state tests in 2012 and 2013 would be dramatically different in their levels of difficulty. The 2013 tests were aligned with Common Core curricular standards, whereas the 2012 tests were not. Long before the 2013 tests, and before the approval of Syracuse’s plan, State Education Commissioner John King, Jr. and other state officials were preparing educators and the public for the likelihood that student performance would plummet in the first year of testing aligned with the Common Core standards, which are more challenging than previous New York State learning standards.

That fewer students would be classified as proficient in 2013 than in 2012, therefore, came as no surprise. The idea that teachers would be judged ineffective or, at best, developing based on a wildly inappropriate metric requiring higher rates of proficiency in 2013 could have, and should have, been picked up in a substantive review of the Syracuse APPR proposal by NYSED.

But NYSED was reviewing hundreds of proposals under tight timelines, hell-bent on rushing these plans into effect without the benefit of a pilot period in which kinks could be worked out. There’s no evidence of careful, thoughtful and substantive review of the applications, leading to all kinds of craziness, including rating teachers on the basis of the performance of students they never taught. But you can bet that no plan was approved without a careful review of all of the compliance checklists in the state’s templates for plan submission.

I wonder how State Commissioner John King, Jr. would like it if his performance evaluation were based on the same criteria applied to teachers in Syracuse. The percentage-point increase in students statewide scoring at levels 3 and 4 in ELA from 2012 to 2013? Well, that actually fell from 55 percent to 31 percent. The Commissioner gets a zero. The percentage-point increase in students scoring at levels 3 and 4 in math? That fell from 65 percent to 31 percent. The Commissioner gets a zero. The percentage-point decrease in students statewide scoring at level 1 in ELA from 2012 to 2013? That actually increased from 10 percent to 32 percent. The Commissioner gets a zero. And the percentage-point decrease in students scoring at level 1 in math? That rose from eight percent to 33 percent. The Commissioner gets a zero.

Just for the heck of it, let’s also allow the Commissioner to score some points if the average teacher growth percentile across the state increased from 2012 to 2013. But because that’s constrained by definition to be 50 each year, there’s no growth there, either. Sorry, Commish! Another zero.
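The arithmetic of this thought experiment can be checked in a few lines. The figures are the statewide percentages quoted above; applying Syracuse-style criteria to them is the author's exercise, reproduced here as a back-of-the-envelope sketch.

```python
# Statewide percentages quoted in the piece:
# metric -> (2012 percent, 2013 percent, direction that would earn points)
figures = {
    "ELA levels 3-4":  (55, 31, "increase"),
    "Math levels 3-4": (65, 31, "increase"),
    "ELA level 1":     (10, 32, "decrease"),
    "Math level 1":    (8, 33, "decrease"),
}

for metric, (y2012, y2013, wanted) in figures.items():
    change = y2013 - y2012  # percentage-point change, 2012 to 2013
    # Points accrue only if the number moved in the rewarded direction.
    moved_right_way = change > 0 if wanted == "increase" else change < 0
    verdict = "points" if moved_right_way else "zero"
    print(f"{metric}: {change:+d} pts ({wanted} wanted) -> {verdict}")
```

Every metric moved the wrong way, so every line prints "zero" — the four zeros the article tallies.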

In sum, Commissioner King is ineffective.

Using these criteria to evaluate professional performance is as unfair to Commissioner King as it is to teachers in Syracuse.

After all, what’s sauce for the goose is sauce for the panderer.


Valerie Strauss covers education and runs The Answer Sheet blog.
Valerie Strauss · October 12, 2013