The money is given to individual state educational agencies, which then run competitions in which school districts compete for funds by demonstrating their need and presenting plans to improve under one of four approved models for change. (The four models include the “transformation” model, which requires a school to replace its principal, institute a teacher evaluation system that incorporates student test scores and implement other reforms, as well as a model in which the school is closed and its students are assigned to other schools.) The three-year-old program, known as SIG, has under the Obama administration awarded grants in two cohorts to more than 1,500 of the country’s lowest-performing schools, the department said.
Education officials tout what they say are positive results from SIG. In a recent news release about the latest SIG awards, the department noted that “early findings show positive momentum and progress in many SIG schools.” This followed the Education Department’s release in November of its second annual report on SIG results, which said that about two-thirds of schools that received grants for 2010-11 saw gains in math and reading test scores, while one-third saw declines. Schools that received grants for the 2011-12 school year by and large fared worse in test-score gains than the first group.
But this Education Week post by Alyson Klein noted some problems with that characterization of success: the way the data were presented made it impossible to tell whether high-performing schools were pulling up lower-performing ones, and test scores cannot be compared from state to state without more detailed data, which limits the significance of the conclusions. Klein quoted Robin Lake, director of the Center on Reinventing Public Education at the University of Washington, which has studied the impact of the SIG program in Washington state, as saying:
Given the amount of money that was put in here, the return on investment looks negligible at this point. I don’t know how you can interpret it any other way.
Then, on Dec. 11, the department announced that its November analysis had been inaccurate because “programmers erroneously excluded schools from the analysis.” New information is expected early in 2014. Here’s the text of the note on the department’s Web site:
The U.S. Department of Education is reviewing and revising the School Improvement Grant (SIG) analysis released to the public on Nov. 21, 2013. This review began after the Department discovered and the contractor that performed the analysis confirmed that their programmers erroneously excluded schools from the analysis. One specific example of these programming errors was the incorrect exclusion of all SIG schools in states that had an assessment change at either the elementary or high school level, when only schools affected by the assessment change should have been removed. In addition to any other necessary changes, the new analysis will include more schools than the previous analysis, therefore, the results may change. In an effort to be cautious and ensure accuracy, we have removed the analysis from our website. We hope to post updated slides on the Office of School Turnaround website in January 2014. Please note that the errors were solely in the analysis of SIG schools and have no effect on the SY 2011-12 school- and district-level assessment data, which also was published on data.gov on Nov. 21, 2013.
There are also, as always, questions about the validity of any program that measures its success strictly by standardized test scores.
Here are the state awards in the latest SIG grants:
NORTH CAROLINA: $13,610,781
RHODE ISLAND: $1,611,540