Unfortunately, something went wrong during the calculations. In my attempts to replicate the assessment, I found that the OECD misclassified a large number of states, a mistake that could have real-world repercussions. Recent research by Judith Kelley and Beth Simmons shows that international indicators are an influential policy tool. Indicators focus international attention on low performers to positive and negative effect. They cause governments in poorly ranked countries to take action to raise their scores when they realize they are being monitored or as domestic actors mobilize and demand change after learning how they rate versus other countries. Given their potential reach, indicators should be handled with care.
The easiest way to understand the OECD fragility assessment is through its impressive visualization of the results, a five-dimensional Venn diagram that I call the Pentagram of Fragility. For each of the five dimensions, the OECD created an index by taking the average of three inputs that are normalized to range from 0 to 100. For example, the Violence dimension is calculated by averaging a country’s scaled values on the number of battle deaths, the number of interpersonal injuries, and a political instability indicator from the World Bank. The 50 countries with the lowest averages are considered vulnerable in that dimension. The pentagram and the report focus on the countries that are vulnerable in two or more dimensions.
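For readers who want to see the recipe concretely, here is a minimal sketch of how one dimension's index might be computed. This is my illustration, not the OECD's actual code: I assume min-max scaling to 0–100 and use made-up column names.

```python
# Minimal sketch of the recipe for one dimension (e.g., Violence).
# Assumes min-max scaling; column names are hypothetical, not the OECD's.
import pandas as pd

def scale_0_100(series: pd.Series) -> pd.Series:
    """Min-max normalize a column to the 0-100 range."""
    return 100 * (series - series.min()) / (series.max() - series.min())

def dimension_index(df: pd.DataFrame, inputs: list) -> pd.DataFrame:
    """Average the scaled inputs and flag the bottom 50 as vulnerable."""
    scaled = df[inputs].apply(scale_0_100)
    out = df[["country"]].copy()
    out["index"] = scaled.mean(axis=1)
    # Lowest averages = most vulnerable; flag the 50 lowest scores.
    out["vulnerable"] = out["index"].rank(method="first") <= 50
    return out
```

Even in this tiny sketch, choices lurk that the report leaves unspecified, such as how ties are broken at the bottom-50 cutoff and what happens to countries with missing inputs.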
Do the classifications make sense? With indices like this, cherry-picking a country and arguing against the result is like shooting fish in a barrel. Afghanistan stands out, as it is listed as vulnerable in every cluster except Institutions. Is that credible? Afghanistan regularly ranks near the bottom on corruption indicators. Gen. John R. Allen identified corruption as “the existential threat to the long-term viability of modern Afghanistan.” How then did Afghanistan not make the bottom 50 for the Institutions dimension?
After replicating the classifications, I found that Afghanistan does belong in the Institutions cluster. But Afghanistan wasn’t the only one; more than half of the countries are misclassified! It is impossible to say exactly what went wrong, since the report does not follow any of the replication best practices highlighted in the OECD’s own handbook for constructing indicators and now making their way into political science (on that note, my replication, including the data sets and calculations, is available on GitHub). All I can say is that despite several attempts to follow their basic recipe, what comes out doesn’t match the picture on the box.
The underlying data is not released with the report, and the citations are often unclear. I pushed on, gathering the data sets including hand-coding health-care capabilities from a map (a tedious reverse paint-by-number). After cleaning and merging, I calculated the indices using the report’s methodology. The methodology is vague in several places, such as how to handle missing data or whether to include territories and small island states. The report says it uses data from “2012 or most recent year,” which could mean closest to 2012 without going over — standard The Price is Right rules — but could also mean using 2013 data when available.
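The two readings of “2012 or most recent year” can be made precise. The sketch below is purely illustrative, assuming we have a dictionary of year-to-value observations for one country and indicator:

```python
# Two interpretations of "2012 or most recent year" for picking an
# observation. `obs` maps year -> value; data here are hypothetical.

def price_is_right(obs, target=2012):
    """Closest year to target without going over (may skip newer data)."""
    eligible = [year for year in obs if year <= target]
    return obs[max(eligible)] if eligible else None

def most_recent(obs):
    """Latest available year, even if it falls after the target."""
    return obs[max(obs)] if obs else None
```

With observations for 2010, 2011, and 2013, the first rule returns the 2011 value while the second returns the 2013 value, so the two readings can genuinely produce different indices.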
For each ambiguity, I tried every interpretation with similar results, but no matter how I interpret the OECD’s methodology, I find that more than half of the Venn diagram is wrong. Only 30 of the 70 listed states are classified correctly. The Venn diagram below shows the corrections from one of my closest matches.
These errors are not minor. Some states make small hops over cluster lines, but other changes are substantial. North Korea does not appear on the OECD’s original Venn diagram but has been on the one-dimensional Fragile State Index every year since 2008. In my replication, North Korea is ranked as the absolute worst country for Institutions and Resilience.
North Korea may have been excluded because it is missing some inputs, even though it has fewer missing variables than Kosovo or Somalia, both of which do make the diagram. The Philippines is not missing any data and similarly did not make the OECD diagram, but in my replication it ranks 19th from last in Violence and 15th from last in Resilience. On the other end of the spectrum, I drop Lesotho from the diagram, as it ranks 124th from the bottom for Violence, far from the bottom-50 cutoff.
The research by Kelley and Simmons shows how these indices make a difference for states that make the list. In addition, the international community is increasingly using indicators like this to frame discussions on foreign assistance. One of the OECD’s goals with the new indicators, for example, was to inform the Sustainable Development Goals, which will replace the Millennium Development Goals post-2015.
The issues highlighted here should serve as a call to arms for some changes in the policy community. Specifically, producers of indices and policy reports need to get serious about instituting norms of best practice for sharing data and methods that inform the policy reports they publish. Besides helping prevent mistakes, this would also make their findings and policy recommendations more robust and encourage the kind of engagement we need given the challenges we face.
Thomas Scherer is a PhD candidate in international relations at Princeton University and a senior research specialist at the U.S. Institute of Peace. The views expressed in this article are his own and not those of the U.S. Institute of Peace.