This is the second post in our mini symposium on what policymakers and the public can learn from recent academic research on nuclear weapons. Peter Feaver is professor of political science and public policy at Duke University and was director for defense policy and arms control on the National Security Council of President George W. Bush.
— Erik Voeten
The title might be the lead-in for a joke, of the "a policymaker, a political scientist and a historian walk into a bar …" variety. And the truth is that, too often, policymakers do joke about outside experts, expecting little and getting less. But policymakers do want something from experts, although they care little whether it comes from card-carrying academics or from other experts with solid academic training who operate within the vast community of think tanks and intelligence analysis shops. In fact, when I was working on the Bush National Security Council (NSC) staff from 2005 to 2007, one of my auxiliary duties was to make sure my colleagues were hearing from the best outside experts, especially academics.
I would argue that what policymakers want and what they fear are both well represented in the vigorous debate captured by Frank Gavin's review essay, "What We Talk About When We Talk About Nuclear Weapons" (hereafter "Gavin"); the strong rebuttal by Matthew Fuhrmann, Matthew Kroenig and Todd S. Sechser (hereafter "FKS"), "The Case for Using Statistics to Study Nuclear Security"; and the subsequent comments by Scott Sagan, Marc Trachtenberg, Bob Jervis, Hal Brands and others (all available here). As is often the case in the policy-academy interface, the insights of greatest value are spread over a wide-ranging discourse – and the insights may not necessarily be the ones that the authors themselves think are of greatest policy value.
Policymakers want to know general tendencies, and this is something that the large-n literature is better suited to provide, as FKS reminds us.
Policymakers also want to know whether the policy case they are currently working on fits general tendencies or, for some reason, is an outlier, and this is something that the historical literature (and associated critique of quantitative studies) is better suited to provide, as Gavin reminds us.
Policymakers need theories, and this is something academics supply in abundance. Policymakers may not know that they need theories, but every policy choice is a prediction that can be expressed in the type of theory language familiar to academic political science: if we do X then Y will (or will not) happen. It is true that sometimes policymakers are reduced to mere guessing, but more often policymakers base their predictions on some implicit causal theory that links inputs to outputs, as in "threats of airstrikes change leaders' cost-benefit calculations and cause them to conclude it is better to acquiesce than to continue to defy us."
Policymakers need to know the limits of expert advice. Well, policymakers already know this. As the historical record makes painfully clear, the field of nuclear studies is replete with expert assessments that proved wanting. And here is where the various types of experts may separate into different categories. On the one hand, there are the experts in the intelligence community, with a remarkable record of first underestimating the nuclear progression of the Soviet, Chinese, Indian, Iraqi (1990), North Korean and Pakistani arsenals, adjusting, and then famously overestimating the nuclear progression of the Iraqi (2002) arsenal. Ever since, the IC has been especially careful to hedge every judgment about the next proliferator, particularly Iran, so as not to be caught with a confident prediction that can be proven wrong. Academic experts, however, face much less accountability, and a sampling of the commentary by academics on the Iran case shows a remarkable divergence of opinion, all of it expressed with an even more remarkable unhedged, unqualified confidence (the ne plus ultra of this pattern is the late Kenneth Waltz, who asserted that the West should stop trying to prevent Iran from getting a nuclear weapon and instead should help it cross the nuclear threshold).
Here it seems to me that both quantitatively oriented academics and qualitatively oriented academics could benefit from greater humility about the policy implications of their research. If you could persuade a policymaker to wade through the original articles, the review, the responses to the reviews, the response to the responses and the auxiliary commentary, such a long-suffering policymaker might be forgiven for asking: So what is the bottom line? Would a nuclear Iran be more dangerous for American national security interests or not? The individual pieces offer contradictory insights on this, and the contradictions do not break down neatly along methodological lines – for instance, there is a nice little debate about the utility of nuclear threats just between the quantitative studies of Kroenig vs. Fuhrmann and Sechser.
What policymakers would like to see more of – and something the Gavin-FKS-et al. commentary starts to develop – is more explicit and self-aware statements about the limits of one's own expert judgments. When statistical analysis concludes that, other things being equal, nuclear superiority might tend to make a state more risk-acceptant, the author should be explicit that this finding, though relevant to policymakers, is hardly dispositive on the utility of nuclear superiority, whether in general (there are many other important aspects of diplomacy beyond risk-taking) or in the specific (there are many reasons why, in a particular case, we might expect that the general pattern will not hold). Likewise, when a close historical examination concludes that the coding of key variables in the Berlin Crisis is uncertain, the author should be candid that this finding, though relevant to policymakers, is hardly dispositive on the dangers of a nuclear-armed Iran either.
Of course, experts cannot supply everything that policymakers want. Policy is not merely about making predictions, it is also about making subjective valuations. Is a world without an Iranian nuclear arsenal worth risking another war in the Middle East? Experts can provide lots of analysis to shed light on aspects of that decision, but at the end of the day it is a judgment call that turns on normative assessments on which reasonable experts could disagree.
And reasonable disagreement is a good baseline for policy debate. Too often, I have seen academics overvalue the policy conclusions of their own research. Then, when policymakers choose a different policy, the academics reach for conspiracy theories and the pernicious effects of special interests to explain why their own policy convictions failed to carry the argument. Of course, nefarious motives can infect the policy process, and I would be the last person to dismiss the politics of the policymaking process as inconsequential, but if the experts themselves collectively reach such uncertain judgments, regardless of the method, is it any wonder the policymakers do too?
Put another way, the hardest part of the policymaker-academic conversation may not be listening to academics, but figuring out which academic, in this instance, happens to be right. The academics in this exchange make a good case for earning the policymaker’s ear and, I suspect, they will have it, even if they may not always agree with what results.