Hardly a day goes by without someone in the world of education issuing a report, data point or other piece of research to make a point that conflicts with another point backed by its own report, data point or research. Here’s a piece on why the “research wars” are winless, written by Matthew Di Carlo, senior fellow at the non-profit Albert Shanker Institute in Washington, D.C. This post originally appeared on the institute’s blog.
By Matthew Di Carlo
In a recent post, Kevin Drum of “Mother Jones” magazine discusses his growing skepticism about the research behind market-based education reform, and about the claims that supporters of these policies make. He cites a recent Los Angeles Times article, which discusses how, in 2000, the San Jose Unified School District in California instituted a so-called “high expectations” policy requiring all students to pass the courses necessary to attend state universities. The reported percentage of students passing these courses increased quickly, causing the district and many others to declare the policy a success. In 2005, Los Angeles Unified, the nation’s second largest district, adopted similar requirements.
For its part, the Times performed its own analysis, and found that the San Jose pass rate was actually no higher in 2011 than in 2000 (in fact, slightly lower for some subgroups), and that the district had overstated its early results by classifying students in a misleading manner. Mr. Drum, reviewing these results, concludes: “It turns out it was all a crock.”
In one sense, that’s true – the district seems to have reported misleading data. On the other hand, neither San Jose Unified’s original evidence (with or without the misclassification) nor the Times analysis is anywhere near sufficient for drawing conclusions – “crock”-based or otherwise – about the effects of this policy. This illustrates the deeper problem here, which is less about one “side” or the other misleading with research than about something much more difficult to address: common misconceptions that impede distinguishing good evidence from bad.
In the case of San Jose, regardless of how the data are coded or how the results turn out, the whole affair turns on the idea that changes in raw pass rates after the “high expectations” policy’s implementation can actually be used to evaluate its impact (or lack thereof). But that’s not how policy analysis works. It is, at best, informed speculation.
Even if San Jose’s pass rates are flat, as appears to be the case, this policy might very well be working. There is no basis for assuming that simply increasing requirements would, by itself, have anything beyond a modest impact. So, perhaps the effect is small and gradual but meaningful, and improvements are being masked by differences between cohorts of graduates, or by concurrent decreases in effectiveness due to budget cuts or other factors. You just can’t tell by eyeballing simple changes, especially in rates based on dichotomous outcomes. (And, by the way, maybe the policy led to improvements in other outcomes, such as college performance among graduates.)
Conversely, consider this counterfactual: Suppose the district had issued “accurate” data, and the L.A. Times analysis showed pass rates had increased more quickly than other districts’. Many people would take this as confirmation that the policy was effective, even though, once again, dozens of other factors, school and non-school, artificial and real, in San Jose or statewide, might have contributed to this observed change in raw rates.
These kinds of sloppy inferences play a dominant role in education debates and policy making, and they cripple both processes. Virtually every day, supporters and critics of individuals, policies, governance structures and even entire policy agendas parse mostly-transitory changes in raw test scores or rates as if they’re valid causal evidence, an approach that will, in the words of Kane and Staiger, eventually end up “praising every variant of educational practice.” There’s a reason why people can – and often do – use NAEP or other testing data to “prove” or “disprove” almost anything.
Nobody wins these particular battles. Everyone is firing blanks.
Back to Kevin Drum. He presents a list of a few things that set off his skepticism alarms. Some of them, like sample size and replication, are sensible (though remember that even small samples, or just a few years of data from a single location, can be very useful, so long as you calibrate your conclusions and interpretations accordingly).
His “alarm system” should not have allowed the L.A. Times analysis to pass through undetected, but his underlying argument — that one must remain “almost boundlessly and annoyingly skeptical” when confronted with evidence — is, in my view, absolutely correct, as regular readers of this blog know very well (especially when it comes to the annoying part).
The inane accusations that this perspective will inevitably evoke — e.g., “protecting the status quo” — should be ignored. Policy makers never have perfect information, and trying new things is a great and necessary part of the process, but assuming that policy changes can do no harm (whether directly or via opportunity costs) is as wrongheaded as assuming they can do no good.
Still, this caution only goes so far. We should always be skeptical. The next, more important step is knowing how to apply and resolve that skepticism.
And this is, needless to say, extraordinarily difficult, even for people who have a research background. There’s a constant barrage of data, reports and papers flying around, and sifting through it with a quality filter, as well as synthesizing large bodies of usually mixed evidence into policy conclusions, are massive challenges. Moreover, we all bring our pre-existing beliefs, as well as other differences, to the table. There are no easy solutions here.
But, one useful first step, at least in education, would be to stop pointing fingers and acknowledge two things. First, neither “side” has anything resembling a monopoly on the misuse of evidence. And, second, such misuse has zero power if enough people can identify it as such.
The views expressed in this post do not necessarily reflect the views of the Albert Shanker Institute, its officers, board members, or any related entity or organization.