In D.C. teacher assessments, details make a difference
Monday, January 4, 2010
I am still receiving e-mails about my Nov. 23 column on Dan Goldfarb, the first teacher to share with me the results of an evaluation under the new D.C. teacher assessment plan, IMPACT.
Goldfarb was not happy with his score, 2.3 out of a possible 4 points. He said the rules forced his evaluator to focus on trivia, such as whether he had been -- to quote the IMPACT guidelines -- "affirming (verbally or in writing) student effort or the connection between hard work and achievement." He said the evaluator told his principal of his complaints about the program and about D.C. Schools Chancellor Michelle A. Rhee, violating confidentiality.
Goldfarb had legitimate gripes. But his evaluation was a tiny sample of this innovative attempt to rate teachers. When I sought evaluations from teachers less opposed to IMPACT than Goldfarb, several said they would send theirs, but so far only one has.
That evaluation differed from Goldfarb's in intriguing ways. The score was almost perfect, 3.92 out of 4. The analysis, however, seemed somewhat out of sync with the thinking behind the program.
The evaluation of John F. Mahoney was done on Nov. 2. He teaches math at the same school where Goldfarb teaches history -- Benjamin Banneker Academic High School. The fact that Goldfarb has been at a school that good for several years indicates he is considered a good teacher. Mahoney's reputation is more obvious. He is one of the most decorated educators in the country. He was inducted into the National Teachers Hall of Fame and holds many awards, including the Agnes Meyer Outstanding Teacher Award given by this newspaper. The Post reported in 2001 that his move from the private Sidwell Friends School to Banneker was an effort "to help more young people with fewer opportunities."
The evaluator gave Mahoney perfect scores in 12 of the 13 categories. Sometimes the reasons were clear. On probing for higher understanding with a ninth-grade class, the evaluator said: "Mr. Mahoney used an effective questioning technique, for example: How can you tell this is correct? What would be other points on the line?"
In checking for understanding, Mahoney "asked clarifying questions and had students present their problems to demonstrate understanding. At one point, he even reminded a few to sit up straight and re-focus," the evaluator wrote.
Mahoney's only flaw was in reinforcing positive behavior, for which he received only 2 points. "This score could be improved if there was reinforcement at strategic times simply by saying 'thank you (student) for _____.' The students of ay [sic] age appreciate praise. With these students it would be an effective means to keep them on task and increase engagement," the evaluator wrote.
That's fine. Mahoney deserves his high marks. But parts of the evaluation were vague. On multiple learning styles, the report said only that "Mr. Mahoney attempted and effectively targeted three learning styles: visual, kinesthetic and interpersonal," without giving any examples.
The evaluator told me through Mahoney that she offered more detailed guidance in her post-evaluation conversation with him. If this process is to work, without too many arguments over who said what to whom, I think the evaluators should get the important points down on paper.
I hope more D.C. teachers will share, perhaps as comments to the online version of this column, how their evaluations are going. If the process is to stimulate thoughtful exchanges about pedagogy, it needs to provide as much detail on the best teachers as it does on those who need to improve. Just as they do with their students, teachers like Mahoney expect to be evaluated, not admired.
Follow Jay's blog at http:/