I used to speak at professional-development sessions for teachers, but I eventually realized I was wasting their time. Like most professional-development presentations, my speeches were not integrated with a research-tested approach to improve teaching. That meant whatever I said was unlikely to help them much, if at all.
My embarrassment has been reawakened by a new study delving deep into the uselessness of professional development. The study by the teacher-training and research group TNTP, titled “The Mirage,” reveals that teachers who are improving have the same professional-development experiences as those who aren’t. Whatever teachers learn in these sessions is not having the effect it should.
Many teachers I know have long complained about professional development. TNTP found in its survey of teachers that “only about 40 percent reported that most of their professional-development activities were a good use of their time.” The study looked at three unidentified districts with a total of 400,000 students, 69 percent of them low-income — places most needing improvement in achievement.
The 50 largest U.S. school districts, five of them in the Washington region, spend at least $8 billion a year on these often-ineffective teacher-training activities, the study said. The 10,507 teachers surveyed by TNTP said they spent on average about 19 full school days a year, about 10 percent of the total available time, on such activities. Yet “no type, amount or combination of development activities appears more likely than any other to help teachers improve substantially,” the study said. Only about 30 percent of the teachers showed any improvement, as measured by classroom observers and student test scores.
Fifty percent of teachers who showed improvement said their professional development was “targeted to support my specific learning context,” but 48 percent of teachers who showed no improvement said the same thing. Forty-one percent of the former group said the “individual teacher is responsible for development.” So did 40 percent of the latter group.
The TNTP researchers said they found a few “consistent, small but statistically significant relationships associated with more teacher improvement.” For example, teachers who were improving were almost twice as likely to rate their own performance as the same as their formal evaluation, while those not improving were almost twice as likely to rate their performance as better than their evaluators said.
Miriam Greenberg, director of education at the Center for Education Policy Research at Harvard University, said in a recent commentary for the Education Week newspaper that teachers are helped by detailed feedback. I have heard this from D.C. teachers who encounter evaluators skilled in breaking down what they are doing right and wrong. But Greenberg said evaluators rarely get much training in doing that. A teacher in one of her studies complained that an evaluator gave her no more than a thumbs-up.
In the TNTP study, just 1 in 5 teachers said they often received follow-up support and tailored coaching opportunities. Only 1 in 10 reported frequent opportunities for practicing new skills. Most said that they wanted to observe other excellent teachers but that they did so less than twice a year.
The TNTP study reported on one charter school network that put much more emphasis on such activities and showed more teacher improvement, but it urged caution about drawing sweeping conclusions because the charter school sample was so small.
Teacher development at the moment is “built mostly on good intentions and false assumptions,” the study said. Teachers wanting to improve don’t have to listen to me anymore, but tens of thousands of professional developers still don’t know what will work better. To find out, they will need a lot more help than they are getting from researchers.