Q: The goal of the Early Warning Project is to assess the risk that an episode of “mass killing” could take place. What do you consider a mass killing?
Mass-killing episodes involve the intentional killing of at least 1,000 civilians from a discrete group. The targeted group might be identified by ethnicity or religion, but it might also be identified by political affiliation—for example, opposition partisans, or alleged supporters of some rebel group.
We focus on large-scale killing because we believe it is the gravest and most urgent of the acts that international law would consider atrocities. Also, because of our interest in prevention, we look specifically at the risk of onset of these episodes. We are not trying to document or anticipate the trajectory of mass-killing episodes that are already underway.
Q: Are there a lot of mass killings? Can you give a few recent examples?
Onsets of mass-killing episodes are rare, and they have become rarer in the past couple of decades, but we can still expect to see one or more of them each year. Recent examples include ongoing episodes in Syria, where state security forces and allied militias appear to target civilians who allegedly support or sympathize with rebel groups; in Sudan’s South Kordofan region, where government forces attack civilian settlements as part of their counterinsurgency campaign; and in Iraq and northern Nigeria, where Islamic State and Boko Haram, respectively, routinely kill civilians in their efforts to seize and hold territory.
Q: Part of how you assess the risks is via statistical modeling. How does this work?
Our statistical risk assessments come from an ensemble of models that estimate risks of new episodes of state-led mass killing for all countries with populations of at least half a million. The ensemble includes three models; two of them are based on leading theories about the causes of mass killing, while the third comes from applying a machine-learning algorithm to all the variables identified by the other two.
We estimate the models using historical data. Then, drawing on the most recent available data, we use the models to generate our forecasts. This idea of combining assessments from multiple models is now common practice in the forecasting world because it almost always produces more accurate results, and that’s what we’re after here.
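The combination step described above can be sketched in a few lines. This is an illustrative example only: the function, the country names, and the probabilities are all hypothetical, and the project's actual ensemble may weight its models rather than average them equally.

```python
from statistics import mean

def ensemble_forecast(model_probs: dict[str, list[float]]) -> dict[str, float]:
    """Combine per-country risk estimates from several models into a
    single forecast by unweighted averaging (one simple ensemble rule)."""
    return {country: mean(probs) for country, probs in model_probs.items()}

# Hypothetical onset-risk estimates from three models for two countries.
estimates = {
    "Country A": [0.12, 0.08, 0.10],
    "Country B": [0.03, 0.05, 0.04],
}
combined = ensemble_forecast(estimates)
# Country A's combined risk is the mean of its three model estimates.
```

Even this naive equal-weight average captures the key property mentioned above: errors that individual models make in different directions tend to cancel, which is why combining models almost always improves accuracy.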
Q: You have paired statistical modeling with something you call the “Expert Opinion Pool.” What do these experts do, and how does their input add to the modeling?
The Expert Opinion Pool elicits and combines predictions from groups of people. We’re essentially blending features of prediction markets and surveys. Instead of having participants trade stocks representing certain events, as they would do in a prediction market, we simply ask them how likely the event is to occur during some window of time—for example, “Before 2016, will there be a new episode of mass killing in Yemen?”—and they give us an answer in the form of a probability.
In that sense, the Expert Opinion Pool is more like a survey than a prediction market. But unlike in a traditional survey, participants can choose which questions to answer. They can also see the overall answers and how they are changing over time, discuss the question with other participants, and update their answers whenever they like as they see relevant news or otherwise change their beliefs.
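The mechanics of pooling those answers can be sketched as follows. This is a minimal, assumed illustration, not the project's actual aggregation rule: it keeps each participant's most recent probability (since participants may update as news arrives) and takes the median as a robust combined estimate.

```python
from statistics import median

def pool_forecast(answers: list[tuple[str, float]]) -> float:
    """Aggregate a chronological stream of (participant, probability)
    answers to one question: keep each participant's latest answer,
    then take the median across participants."""
    latest: dict[str, float] = {}
    for participant, prob in answers:  # assumed chronological order
        latest[participant] = prob    # later answers overwrite earlier ones
    return median(latest.values())

# Hypothetical answer stream for one question:
stream = [("ana", 0.20), ("ben", 0.35), ("ana", 0.30), ("chi", 0.25)]
# Latest answers are ana=0.30, ben=0.35, chi=0.25, so the pooled
# estimate is their median, 0.30.
```

Using the median rather than the mean is one common design choice here, because it keeps a single extreme answer from dragging the pooled probability around.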
We don’t mathematically combine the forecasts from the Opinion Pool with the ones from our statistical models, but we do try to take advantage of the ways they complement each other.
The great advantage of the Opinion Pool is its flexibility. Statistical forecasts can only be updated as often as you get fresh data, and most of the data on which our statistical models depend are updated only once each year and with a several-month delay.
So instead of waiting for a whole year to revisit the risk landscape, we can ask our pool of experts about countries or situations of greatest concern and let their responses inform us about how those risks are evolving, or how they compare in light of things the models couldn’t consider because the requisite data don’t exist.
I would be remiss if I didn’t add that we are always looking to grow and diversify our pool of volunteer forecasters. If you pay close attention to the issues we cover or to a part of the world where atrocities are more likely to occur and would like to contribute your knowledge to this endeavor, please let us know by sending an e-mail to firstname.lastname@example.org.
Q: What are, in your mind, some of the interesting or unexpected findings that your forecasting has produced?
We have only been generating statistical risk assessments since 2013, and our opinion pool has only been running for a little more than a year with a modest number of participants. Given how rare these events are, that’s not enough time to establish a track record or grade our performance with much confidence.
That said, we are encouraged by the results so far. All of the countries that have seen onsets of state-led mass killing in the past couple of years — Egypt, Nigeria and South Sudan — were ranked among the highest-risk cases by our statistical models before those onsets occurred. And although we didn’t see any clear onsets of state-led mass killing in 2014, the two cases that came closest, Iraq and Myanmar, were identified early in the year by our opinion pool as two of the most likely candidates.
In our 2015 statistical risk assessments, released just this week, most of the countries near the top of the list are familiar from previous years, but new or worsening instability has vaulted some new ones into their midst, including Ukraine and Burkina Faso.
Q: Are we likely to see more of this kind of work in the future? How can it help governments or other actors?
I expect we will see more work like this in the future, as the data needed to do it become more plentiful, as the methods become better known, and as potential consumers develop more of an appetite for it. I’ve been doing work like this for more than 15 years, and during that time I think I’ve seen more people open up to the idea that predictive analytics can improve planning and decision-making by providing better information, even on very-hard-to-predict political topics.
A key point is that the forecasts don’t have to be perfect to be useful; they just have to be better than the status quo, which in most cases is not so hot. When organizations do try to anticipate events, they often rely on the judgments of individual experts or on lists produced in staff meetings, neither of which is as reliable as the techniques we’re using here. Collectively, we can do better, and the tools and raw materials needed to implement those alternatives are becoming more accessible, so I expect that practice will follow, albeit in fits and starts.