Last week we ran a mini-symposium organized by Frank Gavin on what policymakers can learn from recent academic research on nuclear weapons. Here Todd S. Sechser and Matthew Fuhrmann respond to some of the issues raised in that symposium.
— Erik Voeten
Sixty-nine years ago this week, the United States detonated the world’s first atomic device in the New Mexico desert. Decades later, however, our understanding of the political dynamics of nuclear weapons is surprisingly tenuous.
Our 2013 study in the academic journal International Organization aimed to move the ball forward by applying new data to an important question: do nuclear nations have more coercive leverage than everyone else? Scholars have long believed that nuclear weapons are useful for preventing aggression, but what about engaging in it? Can nuclear nations more easily compel rivals to give up valuable possessions or change their behavior? If so, then we should be especially worried that Iran or another rogue country will acquire nuclear weapons and use them to blackmail its neighbors.
But we found a surprising answer: Nuclear weapons don’t seem to help much. We examined more than 200 coercive attempts (spanning nine decades) and found that nuclear and nonnuclear countries have about the same rate of success. The primary benefit of nuclear weapons seems to be for deterrence, not for coercion.
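The core comparison behind this finding is simple in principle: group coercive attempts by whether the challenger had nuclear weapons, then compare success rates across the two groups. Here is a minimal sketch in Python; the records are entirely invented for illustration and are not drawn from the authors' actual dataset of 200-plus cases:

```python
# Each record is (challenger_has_nukes, threat_succeeded).
# These eight cases are hypothetical, chosen only to show the comparison.
attempts = [
    (True, True), (True, False), (True, False), (True, True),
    (False, True), (False, False), (False, True), (False, False),
]

def success_rate(records, nuclear):
    """Share of coercive attempts that succeeded, for one group of challengers."""
    outcomes = [won for has_nukes, won in records if has_nukes == nuclear]
    return sum(outcomes) / len(outcomes)

print(success_rate(attempts, nuclear=True))   # 0.5
print(success_rate(attempts, nuclear=False))  # 0.5
```

With a real dataset one would also want a formal test of the difference and controls for confounding factors, but the grouped comparison is the basic intuition: if the two rates look about the same, nuclear weapons are not conferring extra coercive leverage.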
Why is this the case? Technologies like nuclear weapons give countries more coercive power if they make it easier to physically take things (like territory) from adversaries, or if they can be used to punish rivals at an acceptable cost. However, most of the time, nuclear weapons don’t do either of these things (see here for more discussion). So if Iran hopes to bully its neighbors with nuclear weapons, history suggests that it is likely to be disappointed.
Our article sparked some debate, in part because its conclusions contradicted those of another article in the same journal issue. But our approach of using statistical research methods proved just as controversial. Historian Francis Gavin (whom we thank for organizing this terrific symposium) argued in a recent essay that statistical studies like ours are fraught with pitfalls. Historical case research, in his view, is a better way to answer questions about nuclear politics – and a better way to generate useful policy advice. Look at the documents, he implores – not the numbers.
But Gavin doesn’t fully appreciate how useful quantitative analysis can be for certain questions in nuclear security. One of the most important virtues of a statistical approach is that it can identify broad patterns across large numbers of cases – patterns that might be missed if we studied only a few high-profile cases. And in the study of nuclear coercion, scholars knew surprisingly little about those broader patterns when we launched our project on coercive diplomacy. Our study revealed, for the first time, that nuclear powers generally don’t make more effective threats than nonnuclear countries. This finding contradicts many longstanding theories of nuclear diplomacy, and the pattern had not previously been recognized. A statistical approach made it much easier to uncover this trend.
Because statistical models are useful for identifying broad trends, they are also very good at identifying exceptions to those trends. In his earlier post, Colin H. Kahl correctly points out that policymakers care deeply about exceptional cases, especially when it comes to nuclear weapons. But how do we know which cases are typical and which are outliers? This is where statistical analysis excels. It can tell us which cases are on the trend line, and which are far from it. When a new policy challenge arises – Iran, for example – statistical models can tell us whether it fits the mold of the “typical” case. Perhaps Iran is an exceptional case (though we doubt it), but until we know the broader trend, we don’t even know what constitutes “exceptional.”
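One common way to make "on the trend line versus far from it" concrete is to fit a simple regression and flag cases with unusually large residuals. The sketch below is a toy illustration under that assumption: the numbers and the two-standard-deviation cutoff are invented for demonstration, not taken from the authors' models.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def outliers(xs, ys, k=2.0):
    """Indices of cases whose residual exceeds k standard deviations."""
    slope, intercept = fit_line(xs, ys)
    resid = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sd = (sum(r * r for r in resid) / len(resid)) ** 0.5
    return [i for i, r in enumerate(resid) if abs(r) > k * sd]

# Nine cases that follow a clean trend, plus one invented "exceptional" case.
xs = list(range(10))
ys = [2 * x for x in xs]
ys[5] = 100
print(outliers(xs, ys))  # [5]
```

The point is not the particular cutoff but the logic: only after estimating the overall trend can we say which cases deviate from it enough to count as exceptional.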
A quantitative approach can also find patterns that aren’t contained in archival documents. For example, some scholars have argued that nuclear weapons matter even when nobody is talking about them – simply by looming in the background, they can coerce. Is this claim true? Looking at archival documents would not necessarily give us the answer, since this is a prediction about what leaders do rather than what they say. Indeed, the nuclear shadow may be so obvious to leaders during crises that they don’t bother mentioning (or recording) it, leaving few breadcrumbs for us to find in the archival trail. A statistical approach is better for this particular question, since it can tell us whether leaders actually behave differently when nuclear weapons are present, even if they don’t always say so.
But let’s be clear: statistical methods are only one approach, and there are limits to how useful they can be. Not everything can be quantified, and as Alexandre Debs points out, quantitative models are no substitute for theories that explain the patterns we find in the data. And some research questions simply cannot be answered with statistical data, as we acknowledged in our response to Gavin’s essay. Quantitative models are not a panacea; they are but one tool in the researcher’s kit.
So, while our International Organization study provides some new insights into the nuclear statecraft puzzle, it isn’t the last word on the subject. Indeed, since publishing our article, we have been completing a book that uses both quantitative and historical techniques to paint a more complete picture of nuclear statecraft. In that book, we conduct more than a dozen case studies that draw extensively from archival work done by Gavin and other historians.
Academia is a competitive enterprise, and it is tempting to see the field of nuclear studies as a “cage match” between political scientists and historians. But this is a mistake. A subject as grave as nuclear war demands all the tools at our disposal, and we need many forms of expertise to fully understand how nuclear weapons shape international politics. Rather than debating which single approach to research is best, we would be wise to exploit each discipline’s unique skills and insights, and keep our intellectual portfolio diversified.