It takes some degree of chutzpah to evaluate teacher preparation programs with data said to be “in-depth” and “comprehensively collected” and then bury in small type the fact that some of the data isn’t actually all that trustworthy.
That’s what the New York City Department of Education did with the newly released reports that are said to grade teacher prep programs at colleges and universities in the city. The department put out a news release this week with the headline: “New York City Becomes the First Major School System in the Country to Comprehensively Collect and Analyze Data on New Teacher Hires from Post-secondary Schools of Education.” You can find individual reports on a department webpage under the title “Human Capital Data.”
The U.S. Education Department under Secretary Arne Duncan has been pushing “accountability” on teacher prep programs, using the standardized test scores of the students of the programs’ graduates as a key measure, despite warnings from testing experts that this is an unreliable way of evaluating teachers. New York is the first to do it, but the effort is likely to be reproduced around the country.
New York’s Teacher Preparation Program Reports are said to “analyze the quality, distribution and retention of new teacher hires” graduating from the 12 college and university education programs that supplied the most educators to the system from 2008-2012. Duncan praised the reports, saying they are a “major step forward” in strengthening teacher preparation. Included in the department’s release is a statement from Duncan that says in part:
The data in this report will support smart decision making and improvement at many levels.
The analyses of the schools, including Columbia University’s and New York University’s teacher prep programs, look at six data points, including the percentage of teachers from a particular school who were hired into the city’s highest-need schools, and tenure decisions made about those teachers. Standardized test scores of the graduates’ students factor into the reports in a big way. But if you look at the “rules” that were used in the analyses, attached at the bottom of each report in tiny lettering, you discover some problems with the data. For example, Rule 7:
NYS Growth Scores chart includes 4-8th grade Math & English Language Arts teachers in SY 2011-12 who received a score. Due to small n sizes, results should be interpreted with caution.
There are other cautions as well, about varying sample sizes and about the fact that test score information is available only for English and math, even though the reports are not limited to English and math teachers.
The following rules were used in the analyses:
1.) Due to rounding, totals may not equal 100% or the sum of individual components; sample sizes vary across charts because some data are not present for all applicants.
2.) Data set includes new traditional-pathway teacher applicants hired by 10/31 in the years listed; analyses exclude alternative pathway applicants such as the NYC Teaching Fellows.
3.) Teachers were linked to undergraduate/graduate programs using the most recent certification recommendation verified by the New York State Education Department, provided it was granted after 2/2/2004 and prior to 2/1 of the hire year.
4.) Due to changes in departmental hiring policies following the implementation of hiring restrictions in SY2009-10, highest-need license analysis does not include SY2008-09.
5.) Citywide tenure in this report includes first decision only (subsequent decisions among those previously extended not included). In addition, tenure findings do not include teachers from alternative pathways. Therefore, results may differ from citywide rates reported elsewhere. SY 2012-13 tenure results are current as of 7/29/2013.
6.) Highest-need schools include (1) Districts 75 and 79, Young Adult Borough Centers (YABC), and transfer schools, or (2) the top 25% of need as measured by prior year Progress Report peer index.
7.) NYS Growth Scores chart includes 4-8th grade Math & English Language Arts teachers in SY 2011-12 who received a score. Due to small n sizes, results should be interpreted with caution.
As it turns out, all 12 schools got decent reports, though some better than others. Though the department said the reports are intended to be a “first step” toward opening “a dialogue” with teacher prep programs on how to improve what they do, there is no doubt that the results will be taken as far more authoritative than they have any right to be.
Certainly it is a reasonable and necessary goal to improve teacher preparation programs. But promoting “grading” schemes for programs that are based on questionable if not useless data is nothing but a waste of money. New York City Mayor Michael Bloomberg, a billionaire, may have money from his personal fortune to spend on such an exercise, but shouldn’t public funds be spent in more productive ways — in New York and everywhere else?