People wait for food on July 3, 2016, after being displaced by fighting in Wau, South Sudan. The country has seen a number of cease-fire agreements brokered — and broken — since 2014. (Charles Lomodong/AFP via Getty Images)

When a cease-fire is imminent, pundits tend to chime in with dramatic speculations and predictions. Some commentators try to reassure the public that the truce will almost certainly hold. Others sound more like Leonard Cohen, whose song “The Future” warned us “I’ve seen the future, brother: It is murder.”

Statistically, optimists are right more often than pessimists, but only by a narrow margin. A 2012 study by Uppsala University researcher Stina Högbladh shows that 125 of 216 agreements signed between 1975 and 2011 remained intact, with no violence recurring within five years.

We wanted to see whether experts really did a better job of forecasting cease-fires

But are these experts better than coin flippers? Maya Hadar, Naomi Bosler and I compared the accuracy of forecasts derived from newspaper editorials and financial markets.

In a new article, we examined how often the editorial commentators of three dailies — Haaretz, the Jerusalem Post and the New York Times — correctly predicted the success of 24 cease-fires in the Middle East. Then we contrasted these forecasts with the stock market returns from two companies — Dan Hotels and IMCO — at the conclusion of the agreement.

Here’s the logic: Financial markets often react sensitively to political developments. If traders anticipate that a cease-fire will fail, returns on assets from the tourism industry (Dan Hotels) should fall, while the defense sector (represented by IMCO) might experience a rally.

We believe that the financial community is in a much more precarious position than the journalists who assess the prospect of a cease-fire. After all, stockbrokers risk losing money, or even their jobs, if they do not predict the future well.

A media pundit, in contrast, is someone whose income depends on the ability to dramatize an event or to meet the ideological bias of the editor or the audience. In other words, the actual correctness of his or her statements isn’t so critical.

The wisdom of the crowd — does it apply here?

The markets have another thing going for them — the “wisdom of the crowd” effect. This regularity, identified by Darwin’s cousin Francis Galton, says the aggregate forecasts of laymen can be more accurate than the predictions of individual forecasters. Prediction markets, for instance, use the collective wisdom of many participants to forecast elections, among other outcomes.
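Galton’s insight can be illustrated with a toy simulation. The sketch below uses made-up numbers: 500 unbiased but noisy individual forecasts of some hypothetical quantity. Averaging them cancels out much of the individual noise, so the crowd’s estimate lands far closer to the true value than a typical individual guess does.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0  # the quantity being forecast (hypothetical)

# 500 individual forecasts: unbiased on average, but each quite imprecise
forecasts = [random.gauss(TRUE_VALUE, 20.0) for _ in range(500)]

# The "crowd" forecast is simply the average of all individual forecasts
crowd_estimate = statistics.mean(forecasts)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# How far off is the average individual?
typical_individual_error = statistics.mean(
    abs(f - TRUE_VALUE) for f in forecasts
)
```

With these settings the crowd’s error is a small fraction of the typical individual error, which is the core of the wisdom-of-the-crowd effect that prediction markets exploit.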

Here’s how we tested our approach. Using a keyword search on a massive collection of political events, we identified 24 cease-fires in the conflicts Israel has fought with various Palestinian factions and other groups from 1993 to 2014. To gauge forecasts from journalistic texts, we evaluated the prose of the editorials and other commentaries with a fine-grained classification scheme derived from communication studies. Our examination categorized commentaries as a prediction of cease-fire success if the overall tone of a commentary was optimistic.

We used techniques developed in financial econometrics to assess whether the returns on the two assets at the conclusion of the cease-fire developed differently in comparison with “normal” market activity and thus less politicized periods. The study classified a stock market reaction as a positive prediction for the stability of the cease-fire if the extra return was positive for the tourism stock and negative for the defense industry firm.
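The classification rule just described can be sketched in a few lines. This is a simplified, mean-adjusted version of the event-study logic (the actual study uses more sophisticated econometric benchmarks), and all the numbers below are hypothetical, for illustration only.

```python
def abnormal_return(event_return, normal_returns):
    """Mean-adjusted model: the event-window return minus the
    average return observed in calmer, 'normal' periods."""
    benchmark = sum(normal_returns) / len(normal_returns)
    return event_return - benchmark

def predicts_stable_ceasefire(tourism_ar, defense_ar):
    """Positive prediction for cease-fire stability: the tourism
    stock earns an extra return while the defense stock loses."""
    return tourism_ar > 0 and defense_ar < 0

# Hypothetical daily returns around a cease-fire announcement
tourism_ar = abnormal_return(0.021, [0.003, -0.001, 0.002, 0.000])
defense_ar = abnormal_return(-0.015, [0.001, 0.002, -0.002, 0.001])

print(predicts_stable_ceasefire(tourism_ar, defense_ar))  # → True
```

Here the tourism stock gains more than usual and the defense stock loses, so the market reaction counts as a prediction that the cease-fire will hold.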

Figure 1, below, reports how accurate these forecasts were. We considered a cease-fire “successful” when the conflict did not experience more than 14 violent events during the first 14 days after the truce announcement.

Figure 1 — Who can predict cease-fire successes better? We distinguish between the accuracy, the recall and the precision of the three newspapers, and what the financial markets said about two companies. “Accuracy” calculates the number of correct predictions divided by the number of predictions made. “Recall” is the number of correctly predicted successes divided by the number of successes, and “precision” divides the number of correctly predicted successes by the number of predicted successes. Scores on these benchmarks higher than 55 percent suggest that the source provides reasonable predictions. Not all newspapers commented on all cease-fires, so we needed to differentiate between Accuracy 1 (correct predictions out of the 24 cease-fire events) and Accuracy 2 (correct predictions out of the number of predictions made by the individual newspaper). Source: G. Schneider
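The three benchmarks in the caption are standard classification metrics, and their definitions can be made concrete with a short sketch. The eight cease-fire outcomes below are hypothetical, chosen only to show how the formulas work.

```python
def accuracy(predictions, outcomes):
    """Correct predictions divided by the number of predictions made."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

def recall(predictions, outcomes):
    """Correctly predicted successes divided by the number of
    actual successes."""
    hits = sum(p and o for p, o in zip(predictions, outcomes))
    return hits / sum(outcomes)

def precision(predictions, outcomes):
    """Correctly predicted successes divided by the number of
    predicted successes."""
    hits = sum(p and o for p, o in zip(predictions, outcomes))
    return hits / sum(predictions)

# Hypothetical example: 8 cease-fires, True = cease-fire held
preds    = [True, True, False, True, False, True, False, False]
outcomes = [True, False, False, True, True, True, False, True]

print(accuracy(preds, outcomes))   # → 0.625  (5 of 8 calls correct)
print(recall(preds, outcomes))     # → 0.6    (3 of 5 successes caught)
print(precision(preds, outcomes))  # → 0.75   (3 of 4 success calls right)
```

The example also shows why the metrics can diverge: a source that rarely predicts success can score high precision while missing many actual successes, which is the pattern the Jerusalem Post results display.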


One of the assets (Dan Hotels) provided the highest accuracy and recall scores, while the second (IMCO) did not perform as well but still achieved reasonable accuracy. The precision of the forecasts derived from the tourism industry asset is, however, lower than that of the three dailies, but its predictions of cease-fire success were still correct in 80 percent of the examined cases.

The good results, especially by the Jerusalem Post, are most likely a consequence of a tendency to under-predict successful cease-fires. That paper perfectly predicted six successes, while there were 16 cases that can be classified as successes, according to our criteria.

The results confirm our suspicion

Yes, it looks overall like stock markets can produce more accurate predictions than media pundits.

We are not the first political scientists who warn against the dramatic forecasts of experts. A group of political science and law professors, for instance, showed that statistical features of Supreme Court cases lead to better predictions than the collective expectation of legal experts.

Philip Tetlock, a University of Pennsylvania professor, has famously stoked skepticism about pundits’ forecasting abilities. Together with Michael Horowitz, he likened pundits to dart-throwing chimps. In a 2015 bestseller written with Dan Gardner, Tetlock qualified this position, adding that unbiased individuals can be trained to become “superforecasters.”

Yet, it seems doubtful that the political game can be altered so that politicians will gravitate toward the predictions of these “foxes” rather than to the “hedgehogs” — the pundits. To legitimize their actions, governments often need to rely on either-or forecasts that the traditional experts provide.

Still, we can counter the collusion of interests between the oracles and politicians in two ways. First, the scientific community can, as Tetlock and others have shown, provide sober alternative predictions that challenge the dramatizing forecasts favored by politicians and media consumers. Second, repeated evaluation of the accuracy of media predictions will help us identify the experts with a track record of faulty forecasting. This could encourage, or even force, governments to stop listening to false prophets.

 Gerald Schneider is professor of international politics at the University of Konstanz in Germany.