These results might reaffirm the beliefs of some that academics have lost touch with reality. Yet, there is little evidence that others did any better. The intelligence community picked up some signs but these were not translated into an actual warning that made it to the top levels of the U.S. or Ukrainian political decision-making structures (although we may find out more about that later). Pundits were writing confidently that Russia would not intervene even as Russian troops were slipping into the country.
So, what went wrong? The good folks at the TRIP project shared their data with me (stripped of identifying information) so that I could take a closer look.
The graph above displays the percentage of scholars within a subgroup who correctly predicted that Russia would intervene militarily (green dots) and the percentage who claimed that it would not (red crosses). The remainder chose “don’t know” (not displayed). Subgroups are ordered from least to most likely to correctly predict that Russia would use military force. The number of scholars in each subgroup is in brackets.
First, some comforting news: scholars who study international security or Russia (or Eastern Europe) as a primary or secondary specialty were more likely to foresee the intervention. It pays (a little bit) to listen to those who know what they are talking about.
Second, scholars who work at a Top-25 institution (as identified by TRIP) were least likely to be correct. This is consistent with Philip Tetlock’s finding that the more famous and successful the pundit, the less accurate the predictions. Perhaps in academia, as in punditry, forcefulness, confidence and decisiveness pay even as these qualities do not translate into predictive accuracy.
Some further prying (not in the graph) shows that this is not because professors at liberal arts colleges were more likely to be accurate: it is professors at research universities somewhat lower down the food chain who were most likely to get it right. Tenured scholars were also no more likely to foresee the intervention than their untenured counterparts.
Third, scholars who use qualitative methods in their research, a dying breed if you believe some commentary (but not the data), were slightly less successful in their predictions than those who use quantitative methods (some scholars use both). The differences are too small to be meaningfully interpreted.
Fourth, and most interesting to me, are the differences related to the “paradigm wars.” International relations scholars have long classified themselves as belonging to different schools of thought, often referred to as “the isms” (see here for a primer). A growing group of scholars, myself included, worry that becoming a card-carrying member of a paradigmatic club can lead to blinders that, among other things, interfere with predictive accuracy.
Consistent with this, those who do not identify with a paradigm were somewhat more likely to be accurate, closely followed by Realists. Self-identified Liberals and Constructivists did poorly, with Liberals both very unlikely to predict intervention and very likely to offer a definitive “no” rather than the “don’t know” answer that was very popular among Constructivists (who sometimes look dimly on the predictive ambitions of social science).
Perhaps a misplaced faith in the power of international law and institutions was at the root of this. After all, the Russian intervention violates a system of laws and norms that these paradigms hold dear. Yet, non-realist scholars who study international law or international organizations as their primary or secondary field were more likely to foresee the military action (see graph).
Delving deeper into the data, I found that only 7 percent of the 150 self-identified Liberals and Constructivists who do not study international organizations and law foresaw the Russian military intervention. By contrast, 15 percent of the 87 Liberals and Constructivists who study international law and organizations got it right. This is admittedly speculative, but it may be that paradigms impose blinders especially outside of one’s field of study. Only 5 percent (4) of the 87 Liberals and Constructivists who do not study international security, Russia or international organizations and law correctly predicted a military intervention.
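For readers who want to replicate this kind of breakdown on the TRIP data, the subgroup percentages amount to a simple group-and-average operation. A minimal sketch in pandas, using toy rows and hypothetical column names (the actual TRIP variable names and codings will differ):

```python
import pandas as pd

# Illustrative stand-in for the de-identified TRIP responses;
# column names here are hypothetical, not the survey's own.
df = pd.DataFrame({
    "paradigm": ["Liberal", "Constructivist", "Realist", "Liberal", "None"],
    "studies_io_law": [False, True, False, True, False],
    "predicted_intervention": [False, True, True, True, False],
})

# Percent predicting intervention within each paradigm-by-field subgroup
rates = (
    df.groupby(["paradigm", "studies_io_law"])["predicted_intervention"]
      .mean() * 100
)
print(rates)
```

The same pattern, applied to the real survey columns, reproduces the 7-percent-versus-15-percent comparison in the paragraph above.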
All of these findings ought to be taken with a hefty grain of salt. The sample is pretty small once you start breaking it down into subgroups. Moreover, if there were a subgroup called “conspiracy theorists,” who see military intervention lurking behind any crisis, we would have declared them clairvoyant based on this one prediction exercise. This is why continuation of these snap polls is so important: it helps expose our biases in a systematic way. Finally, none of this should distract us from the most important conclusion: that most scholars (including me) got it wrong.
[Edited to remove an inaccurate description in the third paragraph of the past use of snap polls]
Postscript: On request: in a multiple regression analysis (whether by OLS or (ordinal) logit), the two covariates that have robust, sizable, and significant (p<.01) negative effects on predicting a military intervention are being at a Top-25 institution and self-identifying with the Liberal school of international relations. I did not find an interactive effect between these two covariates. The significance of the other covariates depends on model specification.