Former French education minister Benoît Hamon waves following partial results in the second round of the French left’s presidential primary election on Jan. 29. (Christian Hartmann/Reuters)

France is only a few months from a much anticipated presidential election. The incumbent, François Hollande of the Socialist Party, has dismal approval ratings and declined to run. The party is struggling even without him.

Drawing on a newly published forecasting model in the latest issue of Science, we can quantify the Socialist Party’s chances of winning, and those chances aren’t good. Our model gives the Socialists a 21 percent probability of winning the first round, and predicts they will be defeated by a margin of about 12 points. The door, then, is clearly open for other parties, including the far-right National Front.

How we did the forecast

There aren’t enough previous French elections to build a forecasting model that yields a confident prediction. But we can be more confident if the prediction is based on all direct executive elections in the world since 1945.

In particular, our model includes every election that could be lost — where opposition is legal, more than one party is allowed and more than one candidate is represented on the ballot. The model focuses on two key questions: What are the chances that the incumbent party will win? And what will its vote share be?

This model incorporates many factors — including whether an incumbent is running, how democratic the country is, whether the country is an aid recipient, whether the country has good relations with the United States, economic growth in the year before the election, how long the current party has held office, and several others. For a subset of these elections, we can also incorporate pre-election polling data.
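For readers curious about the mechanics, a model of this general kind can be sketched as a logistic regression over election-level covariates. This is an illustration only, on made-up data: the variable names, coefficients and numbers below are hypothetical, not the paper's actual specification or estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: each row is one election; columns mimic the kinds of
# covariates described above (hypothetical, not the paper's exact spec).
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),    # is the incumbent personally running?
    rng.uniform(-10, 10, n),  # how democratic the country is (a score)
    rng.integers(0, 2, n),    # is the country an aid recipient?
    rng.normal(2, 3, n),      # economic growth in the year before (%)
    rng.integers(1, 30, n),   # years the current party has held office
])

# Synthetic outcome: did the incumbent party win (1) or lose (0)?
# Generated from an invented relationship, purely for demonstration.
true_logit = -0.5 + 0.8 * X[:, 0] + 0.3 * X[:, 3] - 0.05 * X[:, 4]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)

# Estimated probability that a hypothetical incumbent party wins:
new_election = np.array([[0, 9.0, 0, 1.2, 5]])
p_win = model.predict_proba(new_election)[0, 1]
print(f"Win probability: {p_win:.2f}")
```

The output of such a model is exactly the quantity reported above: a probability that the incumbent party wins, given the election's circumstances.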

A key strength of this model is that we do not have to rely only on the elections from a single country. Of course, our prediction for France draws on the relevant French economic, political and polling data, but this prediction — and any prediction for a single election — “borrows” information from other elections that have occurred in similar economic and political circumstances.
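This "borrowing" idea can be illustrated with a simple partial-pooling calculation: a country's estimate is pulled toward the cross-country average, and the pull is stronger when that country has few elections of its own. The counts and the pooling weight below are invented for illustration and are not the model's actual estimates.

```python
# Partial pooling, illustrated: shrink each country's observed
# incumbent-win rate toward the global rate. Countries with few
# elections borrow more heavily from the pooled data.
# All numbers here are made up for the sake of the example.
records = {"France": (5, 9), "US": (10, 18), "Brazil": (4, 8)}  # (wins, elections)

total_wins = sum(w for w, n in records.values())
total_elections = sum(n for w, n in records.values())
global_rate = total_wins / total_elections

PRIOR_STRENGTH = 20  # pseudo-elections of pooled information (a tuning choice)

def pooled_estimate(country):
    wins, n = records[country]
    # Weighted average of the country's own rate and the global rate:
    # with small n, the global rate dominates; with large n, the
    # country's own history does.
    return (wins + PRIOR_STRENGTH * global_rate) / (n + PRIOR_STRENGTH)

for country in records:
    print(country, round(pooled_estimate(country), 3))
```

Each pooled estimate lands between the country's own win rate and the global average, which is the sense in which a prediction for one country "borrows" information from elections elsewhere.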

Here is why that is valuable. In the United States, a well-established correlation exists between economic growth and election outcomes: The better the economy is doing, the better the incumbent party does. But outside the United States, economic factors are only weakly related to electoral outcomes.

This suggests that how economic growth is “experienced,” and thus its political implications, is different across countries with different levels of economic and political development. Economic growth might be politically beneficial because of increased income and wealth, but might also undermine the incumbent government if it brings social disruption, inequality and environmental degradation.

By examining such a wide swath of elections, we can account for different possibilities. Because elections occur with such regularity around the globe, they give us much more information on which to base predictions.

But why should we trust a forecast?

What sparked our research — which has now been underway for three years — was a debate in Foreign Policy between Michael Ward and Nils Metternich, on one side, and Jay Ulfelder, on the other, about whether it was possible to generate a reliable global forecasting model of presidential elections.

Ulfelder was skeptical, and our research confirms the need for caution. We have found biased pollsters and, therefore, imperfect measurements of public opinion.

But as Ward and Metternich suggested, these problems are not that severe. Modern statistical techniques can adjust for and improve on imperfect measures.

After the Brexit vote, the failed peace treaty referendum in Colombia and the last U.S. presidential election, many pundits declared quantitative forecasting of elections dead. Similar claims were made in 1948, after polls predicted that Thomas Dewey would defeat Harry Truman in the presidential election.

Then, as now, these pundits miss the point. Polls and statistical models of elections will never perfectly predict the future — they’re not magic — and scholars will need to continually refine these tools and develop new ones.

But compared with the subjective assessments of the typical pundit, quantitative forecasts perform quite well. The reports of the death of quantitative electoral forecasts are greatly exaggerated.

Stefan Wojcik is a data scientist and researcher at One Earth Future Foundation, a PhD in political science from the University of Colorado, and until recently a postdoctoral affiliate of Northeastern and Harvard Universities.

Ryan Kennedy is an associate professor of political science, founding director of the University of Houston Center for International and Comparative Studies (CICS) and a research associate with UH’s Hobby School of Public Affairs.

David Lazer is distinguished professor of political and computer science, co-director of the NULab for texts, maps and networks at Northeastern University, affiliate of the Institute for Quantitative Social Science at Harvard University and co-founder of Volunteer Science.