We just watched the Golden Globe Awards, and now we have the Bunkum Awards.

The Bunkum Awards?

Presented by the National Education Policy Center, which brings together interdisciplinary scholars at the University of Colorado, Boulder, these awards are given for what the presenters say is “shoddy” educational research, “work based on weak data, questionable analysis and overblown recommendations.”

The awards get their name from Buncombe County, N.C., where, in 1820, Rep. Felix Walker delivered “a speech for Buncombe” on whether Missouri should be admitted to the United States as a free or slave state, and he rambled on so much that his colleagues yelled at him to stop. From then on, “bunkum” came to mean long-winded nonsense.

Here’s a list of the awards for 2013 research, with links to a review by the National Education Policy Center of each of the “winning” studies. You can watch a video of the awards “ceremony” here, hosted by David Berliner, Regents’ Professor Emeritus and former dean of the College of Education at Arizona State University, a fellow of the National Academy of Education and past president of the American Educational Research Association. You can read more about each award here.

The Grand Prize went to the Brookings Institution and its Brown Center on Education Policy for its second annual Education Choice and Competition Index (ECCI) and the third annual ECCI released last week, which, the policy center says, “use a rating scale [for over 100 school districts] best described as political drivel, based on 13 indicators that favor a deregulated, scaled-up school choice system” and that “are devoid of any empirical evidence that these attributes might produce better education.” The center’s review of the Brookings Index can be found here and of its NYC report here.

Two organizations won in the category “Look Mom! I Gave Myself an ‘A’ on My Report Card!” Each created its own grading system to evaluate states on school reform based on measures that it thought mattered, and gave an “F” to states that didn’t agree with it, according to policy center director Kevin Welner, a professor at the University of Colorado Boulder.

The first of the two is the American Legislative Exchange Council (ALEC), with its Report Card on American Education, which claimed that its grades were “research-based” but instead, the awards presenters said, were “a compilation of cherry-picked assertions from other advocacy think tanks.” You can read the policy center’s review here.

The second winner in the category is Michelle Rhee’s advocacy group StudentsFirst, which identified 24 grading measures that it deemed important regarding school choice, test-based accountability and governance changes, but never explained how these particular measures affected student outcomes. The center’s review is here.

The “Do You Believe in Miracles?” Award went to the Public Agenda Foundation for “Failure is Not an Option: How Principals, Teachers, Students and Parents from Ohio’s High-Achieving, High-Poverty Schools Explain Their Success,” which essentially claims that schools following a specific reform pattern can overcome the effects of poverty, unemployment and poor health care on student performance. They can’t, but reformers like to accuse people who raise these issues of making “excuses” for teachers. The center said that Public Agenda’s recommendations – “engage teachers,” “leverage a great reputation,” “be careful about burnout” and “celebrate success” – are “reasonable but are also obvious and almost laughable as the recommended means of ensuring that failure in high-poverty communities is truly not an option.” You can see a review of the report here.

The “We’re Pretty Sure We Could Have Done More With $45 Million” Award goes to the Bill & Melinda Gates Foundation and its Measures of Effective Teaching Project. The foundation poured millions into bringing in researchers to study the important issue of effective teaching, but there were problems from the beginning, the center said. From the center’s Web site:

Part of its purpose was to examine teacher evaluation methods using randomly-assigned students; unfortunately, the students did not remain randomly assigned. More troubling was the pre-ordained determination that effectiveness would be narrowly and overwhelmingly defined in terms of test-score increases. Yet when the MET researchers studied the effects of teacher observations, value-added test scores and student surveys, they found correlations so weak that no common attribute or characteristic of teacher quality could be found. So in the end, they could not define an ‘effective teacher.’

“But the actual results didn’t stop the Gates folks from announcing that they indeed found a way to measure effective teaching,” explained Welner. “And it did not deter the federal government from strong-arming states into adopting policies tying teacher evaluation to measures of student growth. The sad reality here is that this major undertaking has been undermined by the policy agenda of high-stakes accountability.”

You can read the center’s review here.

And you can read about other awards here.