For many Americans who witnessed the attack on the Capitol last Jan. 6, the idea of a mob storming a bedrock of democracy was unthinkable. To a growing group of data scientists, though, it was exactly the kind of event their models are built to foresee.
“We now have the data — and opportunity — to pursue a very different path than we did before,” said Clayton Besaw, who helps run CoupCast, a machine-learning-driven program based at the University of Central Florida that predicts the likelihood of coups and electoral violence for dozens of countries each month.
The efforts have acquired new urgency as alarms sound in the United States. Last month, three retired generals warned in a Washington Post op-ed that they saw the conditions for a military coup taking shape after the 2024 election. Former president Jimmy Carter, writing in the New York Times, said the country “now teeters on the brink of a widening abyss.” Experts have worried about various forms of subversion and violence.
The provocative idea behind unrest prediction is that an AI model quantifying the right variables — a country’s democratic history, democratic “backsliding,” economic swings, “social-trust” levels, transportation disruptions, weather volatility and others — can make the art of predicting political violence more scientific than ever.
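In spirit, each country becomes a vector of those quantified variables, and a model maps the vector to a risk score. Here is a minimal sketch of the idea in Python; the features, values and weights are entirely hypothetical, not CoupCast’s actual inputs:

```python
# Toy illustration of the "quantify variables" idea: a country becomes
# a feature vector, and a model maps it to a risk score. All feature
# names, values and weights here are hypothetical.
import math

features = {
    "democratic_history_years": 230,  # long, unbroken democratic history
    "backsliding_index": 0.4,         # 0 = none, 1 = severe
    "economic_swing": 0.3,            # magnitude of recent economic shock
    "social_trust": 0.6,              # survey-derived, 0 to 1
}

# Illustrative weights a trained model might learn; the signs matter:
# backsliding and shocks raise risk, history and trust lower it.
weights = {
    "democratic_history_years": -0.01,
    "backsliding_index": 2.5,
    "economic_swing": 1.8,
    "social_trust": -1.2,
}
bias = -1.0

score = bias + sum(weights[k] * v for k, v in features.items())
risk = 1 / (1 + math.exp(-score))  # logistic squash into a probability
print(f"estimated unrest risk: {risk:.1%}")  # about 8% with these numbers
```

A real system would learn the weights from decades of historical data rather than hard-coding them; the point is only that once the variables are numbers, a risk estimate is arithmetic.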
Some ask whether any model can really process the myriad, often local factors that play into unrest. To advocates, however, the science is strong enough, and the data robust enough, to sketch a meaningful picture. In their conception, the next Jan. 6 won’t come seemingly out of nowhere, as it did last winter; the models will give off warnings about the body politic the way chest pains do for actual bodies.
“Another analogy that works for me is the weather,” said Philip Schrodt, considered one of the fathers of unrest prediction, also known as conflict prediction. A longtime Pennsylvania State University political science professor, Schrodt now works as a high-level consultant, including for U.S. intelligence agencies, using AI to predict violence. “People will see threats like we see the fronts of a storm — not as publicly, maybe, but with a lot of the same results. There’s a lot of utility for this here at home.”
CoupCast is a prime example. The United States was always included in its model as a kind of afterthought, ranked on the very low end of the spectrum for both coups and election violence. But with new data from Jan. 6, researchers reprogrammed the model to take into account factors it had traditionally underplayed, like the role of a leader encouraging a mob, while reducing the weight of traditionally important factors like long-term democratic history.
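Continuing the toy sketch above, that kind of retraining amounts, in effect, to adding a factor and shifting weights (the names and numbers again being purely illustrative):

```python
# Continuing the toy sketch above: after Jan. 6, a factor the model had
# underplayed gets real weight, and democratic history counts for less.
# All names and numbers remain hypothetical.
import math

features = {
    "democratic_history_years": 230,
    "backsliding_index": 0.4,
    "economic_swing": 0.3,
    "social_trust": 0.6,
    "leader_incitement": 0.8,            # newly emphasized factor
}
weights = {
    "democratic_history_years": -0.007,  # history matters less than before
    "backsliding_index": 2.5,
    "economic_swing": 1.8,
    "social_trust": -1.2,
    "leader_incitement": 1.0,            # new, heavily weighted input
}
bias = -1.0

score = bias + sum(weights[k] * v for k, v in features.items())
risk = 1 / (1 + math.exp(-score))
print(f"updated unrest risk: {risk:.1%}")  # roughly 27% with these numbers
```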
Its risk assessment of electoral violence in the United States has gone up as a result. And although data scientists say America’s vulnerability still trails that of, say, a fragile democracy like Ukraine or a backsliding one like Turkey, it’s not nearly as low as it once was.
“It’s pretty clear from the model we’re heading into a period where we’re more at risk for sustained political violence — the building blocks are there,” Besaw said. CoupCast was run by a Colorado-based nonprofit called One Earth Future for five years beginning in 2016 before being turned over to UCF.
Another group, the nonprofit Armed Conflict Location & Event Data Project, or ACLED, also monitors and predicts crises around the world, employing a mixed-method approach that relies on both machine learning and software-equipped humans.
“There has been this sort of American exceptionalism among the people doing prediction that we don’t need to pay attention to this, and I think that needs to change,” said Roudabeh Kishi, the group’s director of research and innovation. ACLED couldn’t even get funding for U.S.-based predictions until 2020, when it began processing data in time for the presidential election. In October 2020, it predicted an elevated risk for an attack on a federal building.
Meanwhile, PeaceTech Lab, a D.C.-based nonprofit focused on using technology to resolve conflict, will in 2022 relaunch Ground Truth, an initiative that uses AI to predict violence associated with elections and other democratic events. The initiative had focused overseas but will now step up its domestic efforts.
“For the 2024 election, God knows we absolutely need to be doing this,” said Sheldon Himelfarb, chief executive of PeaceTech. “You can draw a line between data and violence in elections.”
The science has advanced rapidly. Past models used simpler constructs and were regarded as weak. Newer ones use algorithmic tools such as gradient boosting, which combines many weak models, each new one weighted toward the errors of the last, into a far stronger predictor. They also run neural networks that study decades of coups and clashes all over the world, refining risk factors as they go.
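As a rough sketch of what the gradient-boosting approach looks like, here is a minimal example using scikit-learn and synthetic data; the features and labels are assumptions for illustration, not CoupCast’s actual pipeline. Each shallow tree is a weak model, and each new tree is fit toward the errors of the ensemble so far:

```python
# A minimal gradient-boosting sketch on synthetic country-month data.
# Feature names, data and labels are invented; CoupCast's real inputs
# are not public in this form.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # hypothetical country-month observations

# Hypothetical features: years of democratic history, economic shock,
# leader-incitement score, social-trust index.
X = np.column_stack([
    rng.integers(0, 200, n),  # democratic_history_years
    rng.normal(0, 1, n),      # economic_shock
    rng.uniform(0, 1, n),     # leader_incitement
    rng.uniform(0, 1, n),     # social_trust
])
# Synthetic label: unrest more likely with shocks, incitement, low trust.
logits = -3 + 1.5 * X[:, 1] + 2.0 * X[:, 2] - 1.0 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each of the 200 shallow trees is a "weak" model; boosting fits each
# new tree toward the examples the previous trees got wrong.
model = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                   learning_rate=0.05)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("risk for one country-month:", model.predict_proba(X_test[:1])[0, 1])
```

The neural-network variants researchers describe swap the trees for layers but serve the same end: learning risk factors from decades of labeled history.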
“There are so many interacting variables,” said Jonathan Powell, an assistant professor at UCF who works on CoupCast. “A machine can analyze thousands of data points and do it in a local context the way a human researcher can’t.”
Many of the models, for instance, find that income inequality does not correlate strongly with insurrection; drastic changes in the economy or climate are more predictive.
And, paradoxically, social media conflict is an unreliable indicator of real-world unrest. (One theory is that when violence is about to take place, many people are either too busy or too scared to unleash screeds online.)
But not all experts are sold. Jonathan Bellish, One Earth Future’s executive director, said he grew disenchanted with the project, eventually handing it off to UCF. “It just seemed to be a lot like trying to predict whether the Astros would win tomorrow night. You can say there’s a 55 percent chance, and that’s better than knowing there’s a 50 percent chance. But is that enough to interpret in a meaningful policy way?”
Part of the issue, he said, is that despite the available data, much electoral violence is local. “We ran a set in one country where we found that the possibility of violence could be correlated to the number of dogs outside, because worried people would pull their dogs in off the streets,” Bellish said. “That’s a very useful data point. But it’s hyperlocal and requires knowing humans on the ground. You can’t build that into a model.” Even ardent advocates of unrest prediction say the models are very unlikely ever to forecast highly specific events, as opposed to general possibilities over time.
Bellish and other skeptics also point to a troubling possibility: Prediction tools could be used to justify crackdowns on peaceful protests, with AI used as a fig leaf. “It’s a real and scary concern,” Powell said.
Others admit the real world can sometimes be too dynamic for models. “Actors react,” said ACLED’s Kishi. “If people are shifting their tactics, a model trained on historical data will miss it.” As an example, she pointed to the group’s tracking of a new Proud Boys strategy of showing up at school board meetings.
“One problem with the weather comparison is it doesn’t know it’s being forecast,” Schrodt conceded. “That’s not true here.” A prediction of low risk, for instance, could prompt a group mulling an action to launch it deliberately, as a surprise tactic.
But he said the main challenges stem from generational and professional resistance. “An undersecretary with a master’s from Georgetown is going to think in terms of diplomacy and human intelligence, because that’s what they know,” Schrodt said. He imagines a very slow transition to these models.
“I don’t think we’ll have this in wide use by January 6, 2025,” he added. “We should, because the technology is there. But it’s an adoption issue.”
The Pentagon, CIA and State Department have been moving on this front. The State Department in 2020 created a Center for Analytics, the CIA hires AI consultants and the military has embarked on several new projects. Last month, commanders in the Pacific announced they had built a software tool that aims to determine in advance which U.S. actions might upset China. And in August, Gen. Glen VanHerck, the NORAD and NORTHCOM commander, disclosed the latest trials of the Global Information Dominance Experiment, in which an AI trained on past global conflicts predicts where new ones are likely to erupt.
But the FBI and the Department of Homeland Security — two agencies central to domestic terrorism — have shown fewer signs of adopting these models.
Advocates say this reluctance is a mistake. “It’s not perfect, and it can be expensive,” said PeaceTech’s Himelfarb. “But there’s enormous unrealized potential to use data for early warning and action. I don’t think these tools are just optional anymore.”