Tom has never invited me to have a beer with him (mine’s a Dogfish Head 90 Minute IPA, if and when he does), but we’ve had occasional friendly online interactions. This doesn’t make him any less wrong. Dan Drezner wrote a piece about the same research a year ago and pointed out that policymakers are perfectly happy with abstract models when they’re built by economists. Their allergic reaction to political science models (which often draw on assumptions and techniques very similar to those of their economic cousins) plausibly has more to do with a sense of wounded dignity (no one likes to think that their unique contribution to world politics can be explained by an abstract model) than with any considered evaluation.
This is a pity, and a problem. Sometimes political scientists have important things to say. One good example is the Iraq War, widely regarded in retrospect as an enormous foreign policy catastrophe. Political scientists had a lot to say in the lead-up to the war — their expert knowledge indicated strongly that it was a very bad idea. Unfortunately, no one was listening to them. As a recent paper by James Long, Daniel Maliniak, Sue Peterson and Michael Tierney describes it:
[S]cholarly opinion differed markedly from that of the general public in the run-up to and throughout the war in Iraq, but academic views were not well represented in the public discourse on the war. First, we find that IR scholars opposed the war in Iraq from the beginning. Unlike public opinion, scholarly opinion showed no “rally ‘round the flag’” effect, in which an international crisis or war generates significant, short run increases in public approval of the president (Mueller 1973). Second, scholarly opinion on the war remained remarkably stable over time. The actions and rhetoric of U.S. policy officials and important events, such as the beginning of the Iraqi civil war in 2006 or the reduction of violence in Iraq following the “surge” in 2007, did not change scholarly opinion, although these events had significant effects on public opinion. Third, differences in opinion between IR scholars and the general public can be explained in part by ideology, as conservative IR scholars were more likely than liberal scholars to support the invasion, and liberal scholars far outnumber their conservative counterparts. Even when we control for ideology, however, we find that IR scholars overwhelmingly rejected central components of the Bush administration’s Iraq policy in far greater percentages than did the general public.
Unfortunately, this expert consensus was almost entirely ignored by the media and by policymakers:
We do not know why scholarly opinion against the war did not find its way onto the op-ed pages of America’s newspapers. IR scholars may simply have chosen to remain silent. This seems unlikely, however. Several dozen highly influential scholars placed an ad in the New York Times in 2002 opposing the use of force in Iraq, and hundreds of IR scholars signed an open letter in the New York Times in 2004 opposing what they saw as the Bush administration’s “misguided” policy in Iraq. It is doubtful that they would then hide their heads in the sand, or that even a small minority of those scholars would not continue to try to influence U.S. policy in Iraq by writing analyses, op-eds, and articles.
Some parts of Ricks’s indictment ring true. As Long and his colleagues note, IR scholars are very bad at quickly applying their expertise to contemporary problems (something that Ricks has noted in the past). More broadly, political scientists often have no experience writing for a broader public (something we’re trying to help change at the Monkey Cage). The problem, however, is that when political scientists do have value to add, policymakers often don’t care to listen to them. That’s a problem that Ricks — and other foreign policy commentators and journalists — could help solve.