The criticisms of this list come mainly in two forms, one theoretical and one methodological. First, some analysts lambast the index because it elevates the whole concept of “failed states” into a grave threat to be addressed through renewed state-building, or because it divides the world up willy-nilly between strong and weak states, a distinction some say is illusory, since there are capable states with poorly run provinces (Mexico) and less capable states with well-managed provinces (Pakistan). As Michael Mazarr notes in Foreign Affairs, “The obsession with weak states was always more of a mania than a sound strategic doctrine.”
A second criticism is that the list is poorly tabulated and dangerously conflates key concepts in what makes a state a state. This leads to obvious tautologies (like using “violence” as an indicator in an index ostensibly used to predict violence). These critics, including Bridget Coggins, who, contrary to Mazarr, thinks the concept of failed states can be rescued, take to task its use of, say, refugee flows as an indicator of state strength: low refugee outflows (as in North Korea) could signal either state weakness (people feel unsafe but cannot leave) or state strength (the state is capable of coercing people into staying).
While the index is a convenient horse to whip, we believe it is a worthy effort (as we’ve argued before), not just as a useful heuristic but as a tool for implementing smarter policy. States at the bottom of the rankings are generally more efficiently run and safer places to live than those at the top. What is disputable is what shifts in individual states’ rankings mean from year to year: whether we should, say, applaud Iran for being “most improved” or scold the Philippines, an American ally, for slipping up the rankings. What does it mean that South Sudan nudged out Somalia as the world’s most fragile state? Equally important, rankings can have social power, as a new paper by Judith Kelley and Beth Simmons on human trafficking shows, at least in that context. In short, being included on an annual list can lead to legislative change, policy adjustments, and reprioritized agendas.
Ways to Improve the Index
Be More Transparent
The index is a composite of 12 indicators – demographic pressures, economic decline, brain drain, and levels of foreign assistance, among others – and is based on the Fund for Peace’s proprietary Conflict Assessment System Tool (CAST), a platform that analyzes and distills millions of bits of information into a list that is supposed to be digestible, informative, and presumably accurate.
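To make the mechanics concrete, here is a minimal sketch of how a composite index of this kind is aggregated. The FSI scores each of its 12 indicators on a 0–10 scale and sums them into a 0–120 total; the indicator names below follow the published list, but the scores and the validation logic are purely illustrative, not the actual CAST methodology.

```python
# Illustrative aggregation for a 12-indicator composite index.
# Each indicator is scored 0-10; the total runs 0-120 (most fragile).
INDICATORS = [
    "demographic_pressures", "refugees_and_idps", "group_grievance",
    "human_flight", "uneven_development", "economic_decline",
    "state_legitimacy", "public_services", "human_rights",
    "security_apparatus", "factionalized_elites", "external_intervention",
]

def composite_score(scores: dict) -> float:
    """Sum 12 indicator scores (each 0-10) into a 0-120 total."""
    missing = set(INDICATORS) - set(scores)
    if missing:
        raise ValueError("missing indicators: %s" % sorted(missing))
    for name in INDICATORS:
        if not 0 <= scores[name] <= 10:
            raise ValueError("%s out of range: %r" % (name, scores[name]))
    return sum(scores[name] for name in INDICATORS)

# Hypothetical country: every indicator at 7.5 gives a total of 90.0.
example = {name: 7.5 for name in INDICATORS}
print(composite_score(example))  # 90.0
```

Note what the sketch makes obvious: the final number is only as informative as the indicator scores fed into it, which is exactly where the criticisms below bite.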
But how can we tell? There are plenty of indicators that would seem to measure state capacity but are not captured by the index, such as extractive capacity. (Why not include an indicator like census frequency, for example?) The FSI also does little to account for variation in state size, which obviously affects how well a state can provide public goods like paved roads far from the capital: it’s easier in tiny Rwanda than in sprawling DRC. And why not measure administrative capacity or effective governance (e.g., whether a state has a freedom of information law on its books that is actually enforced)?
Measure What It Claims to Measure
Instead, the FSI plays it safe. It attempts to measure what we presumably already know rather than adding value to what we don’t know yet. If Burma releases dissidents or Iran engages in nuclear talks, they get a favorable bump in the rankings, even though the capacity of their respective states has arguably not budged an iota. Indeed, the reasons countries slide up and down from year to year seem to owe more to politics than to state-building initiatives. This much is admitted by the FSI’s co-director, J.J. Messner, who told Voice of America, “[T]here are still significant tensions between North and South Korea, but they are perhaps not quite as intense now as they were a few years ago.” But that implies a decrease in tensions somehow makes North Korea a stronger state, when presumably one could argue the opposite. Likewise, Iran was voted “most improved” because, among other things, it agreed to hold nuclear talks with the West. But how does that translate into less fragility? If anything, its olive branch to the West may make the state less stable internally.
While we applaud the move to ditch the overused word “failed” from its title, “fragile” is a term equally fraught with controversy. What makes a state “fragile” exactly? Is it relative or absolute? Is that the same thing as a “weak state”? And, calling all grammarians, what does “Most Worsened” mean?
Dump the War-Porn
Finally, why demean the endeavor by including the FSI’s Postcards from Hell slideshow, whose gory images look like a “Game of Thrones” advert, not an accurate portrait of what “fragility” looks like? After all, a state like Ukraine can be on the verge of collapse but not look like “hell,” whereas Syria looks ever more firmly under Alawite control yet could easily be described as “hell.” Not all failed or fragile states are hotbeds of Hobbesian anarchy. The goal of the index should be to hold real practical value, not to serve as conflict-porn or click-bait.
Hold Predictive Value
For the tool to be useful, it should be predictive. Can we sort countries by their susceptibility to certain undesirable outcomes? We should be able to, and certainly better than back-of-the-envelope guessing, which is essentially what this arbitrary ranking amounts to. A look back at Ukraine’s 2013 score, for instance, gives no hint that the country was about to be carved up. In 2010, states like Syria, Libya, Egypt, and Tunisia fell into the same bracket as Brazil, Turkey, and Russia. One way to avoid this perennial failure to anticipate events might be for the index to examine provinces, given the vast sub-national variation within countries. That would require a messier map of the world, but at least it would be more accurate (and account for what anthropologists call “alternatively governed” spaces).
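The predictive claim is testable with a simple back-test: take last year’s scores and ask how often a country that subsequently suffered a crisis was scored as more fragile than one that did not (the AUC statistic). The sketch below is illustrative; the scores and outcomes are hypothetical, not actual FSI data.

```python
def auc(scores, outcomes):
    """Probability that a randomly chosen crisis country (outcome 1) was
    scored as more fragile than a randomly chosen non-crisis country (0).
    Ties count as half a win; 1.0 is perfect, 0.5 is coin-flipping."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical prior-year scores and next-year crisis outcomes. A
# Ukraine-like case (moderate score, then crisis) drags the AUC
# well below a perfect 1.0.
scores = [65.0, 110.0, 95.0, 30.0]
outcomes = [1, 1, 0, 0]
print(auc(scores, outcomes))  # 0.75
```

An index that repeatedly misses cases like Ukraine would score poorly on exactly this kind of check, which is the point of the criticism.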
A better way forward might be to cluster countries. The differences among countries may be ordered, but any kind of exact ranking obscures the reality of the situation, as the South Sudan/Somalia example demonstrates. Most reasonable people would agree both states face serious internal (and external) threats, but ranking them recalls the obscure debates historians have over who was worse, Hitler or Stalin. Those dictators might belong in a category or cluster, along with Pol Pot and Idi Amin, but ranking them is nonsensical.
The new index already seems to follow, implicitly, a conceptual schema that groups types of states rather than creating intervals of equal value between ranks. Similar to Freedom House, a revised index could place countries at risk into clusters (at risk of what, we aren’t sure; more on this later). Knowing the characteristics of events that occur within each cluster might be helpful. For example, we might find more hate crimes in relatively stable systems (see the U.S. post-9/11) and more ethnic riots in less stable ones.
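The clustering idea can be sketched in a few lines: assign each country to a broad band by score instead of an exact rank. The band labels echo the FSI’s own alert/warning/stable/sustainable color-coding, but the thresholds and country scores here are illustrative assumptions, not the index’s actual cut-offs.

```python
# Sketch of clustering by band instead of exact rank.
# Thresholds are illustrative, on a 0-120 fragility scale.
BANDS = [  # (label, minimum total score)
    ("alert", 90.0),
    ("warning", 60.0),
    ("stable", 30.0),
    ("sustainable", 0.0),
]

def assign_band(score):
    for label, floor in BANDS:
        if score >= floor:
            return label
    raise ValueError("score out of range: %r" % score)

def cluster(scores):
    """Map each band label to the list of countries that fall into it."""
    tiers = {label: [] for label, _ in BANDS}
    for country, score in scores.items():
        tiers[assign_band(score)].append(country)
    return tiers

# Hypothetical scores: South Sudan and Somalia land in the same band,
# so the index no longer has to pretend one is "worse" than the other.
sample = {"South Sudan": 112.9, "Somalia": 112.6, "Norway": 23.0}
print(cluster(sample))
```

The payoff is exactly the one argued above: within a band, the South Sudan/Somalia question simply does not arise, while movement between bands remains meaningful.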
Fragility, to paraphrase Alexander Wendt, is what we make of it, which explains why the Fragile States Index comes under such harsh criticism each year. Let’s not throw the baby out with the bathwater just yet. But let’s admit that at 10 years old, the FSI is no longer a precocious toddler and should be either reformed or scrapped. Otherwise, like other listicles, it risks becoming a conversation starter for parlor debates, not a serious tool for policy-making.