Donald Trump acknowledges photographers after speaking at a campaign rally in Baton Rouge, La., Thursday, Feb. 11, 2016. (Gerald Herbert/AP)

It’s rare for an election to raise a metaphysical question — and even rarer for Donald Trump to do so. But that is exactly what he has done by repeatedly confounding expectations of his electoral demise: He has rattled our conception of how knowable the future is.

Pundit predictions are notoriously poor, but last fall, there was near-unanimity among political analysts that Trump would fail, and fast. Nate Silver, the statistical wunderkind who made his reputation by accurately calling elections using poll-driven models, said that Trump’s base of support was “about the same share of people who think the Apollo moon landings were faked.”

Now that voters have actually weighed in, Trump has won the New Hampshire primary after finishing second in Iowa. His success has been so astounding that, as Jack Shafer wrote in Politico, it looks a lot like what Nassim Nicholas Taleb famously dubbed a “black swan” — an enormously consequential event that is unpredictable but seems foreseeable, even obvious, in hindsight. According to Taleb, “A small number of black swans explain almost everything in our world.” The 9/11 attacks were a black swan, the bursting of the housing bubble was a black swan and so is Trump’s credible shot at the presidency.

Putting aside the (many) earthly worries about a Trump administration, the epistemological problem with Trump’s campaign is that it seems to reinforce Taleb’s logic: Most things that matter can’t be forecast, and most things that can be forecast don’t matter. Our ability to understand the world around us, or at least the world ahead of us, is limited to the trivial. If you’re a columnist covering the election, a general charged with anticipating the next war or just an average person trying to plan your life, such pessimism is disheartening.

Fortunately, it is overblown. True black swans — events that are unforeseeable because they are unimaginable — are exceedingly rare. If we can imagine the conditions under which things could occur, we can use probability to estimate their likelihood, at least roughly. Then we can test how right or wrong we were and adjust later predictions to make them more accurate. All of which means that it’s much easier to predict political bombshells a la Trump than you might imagine.


The prospect of a President Trump is closer to a gray swan than a black one — and it offers a valuable opportunity for learning just how much we can and cannot know about the future.

At one end of the epistemological spectrum is the deterministic universe proposed by 19th-century French mathematician Pierre-Simon Laplace. Working in the midst of the Enlightenment, Laplace posited that rules such as Newton’s laws of motion might govern all of nature, including its human inhabitants. An entity that knew all the rules could, in principle, extrapolate from the current state of the world and see precisely into the distant future: “For such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” Laplace called this intellect a “demon,” and it’s not hard to see why: It offered the possibility of omniscience, but it left no room for free will.

At the other end of the spectrum lies the universe as scientists see it now, one permeated by tremendous uncertainty. Sure, thanks to Newton and Einstein, we know that objects behave reliably at the macro level. But quantum mechanics tells us that matter behaves unpredictably at the subatomic level, and although the social sciences can explain some of the funny things people do, we’re a long way from a unified theory of human behavior.

This uncertainty is uncomfortable. Which is why many people prefer a universe where the divine moderates the tension between determinism and free will — where the natural world operates according to scientific laws but human endeavors are guided by some sort of master plan. Infusing events, even horrible ones, with meaning feels more reassuring than “things happen.”

Probability is the humble secular answer to that problem. It allows us to transform vague, anything-could-happen sentiments into measurable risk, and it is essential to finance, medicine, engineering and more. Without ways of quantifying risk, decision-making would come to a standstill — or grind unproductively against a logical wall.

Consider the weather, which operates according to deterministic laws but is unpredictable more than a week out, because even tiny changes in initial conditions compound exponentially. It’s an example of a chaotic system. If you had perfect input data and infinite computing power, you could perhaps predict the weather perfectly. But because meteorologists aren’t Laplacian demons, they frame their predictions as probabilities, which yield useful information about the future despite epistemological limitations.
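For readers who want to see that sensitivity to initial conditions for themselves, here is a toy sketch in Python. It uses the logistic map, a textbook stand-in for chaotic dynamics rather than anything resembling an actual weather model, and the starting values are invented purely for illustration.

```python
# Toy demonstration of sensitivity to initial conditions, using the logistic
# map as a stand-in for a chaotic system (it is not a weather model).
def trajectory(x0, r=4.0, steps=30):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return all values."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)   # the "true" starting state
b = trajectory(0.200001)   # the same state, measured a hair imprecisely

for step in (1, 5, 10, 15, 20, 30):
    print(f"step {step:2d}: gap = {abs(a[step] - b[step]):.6f}")
# The two runs agree closely at first, then diverge completely, which is why
# forecasts far enough out are framed as probabilities rather than certainties.
```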

Of course, politics is not weather: It’s a complex system with lots of variables governed by rules that are unclear.

That is why, according to Taleb, a normally useful tool like probability can’t help us see disruptive events like Trump’s candidacy: “Political and economic ‘tail events’ ” — that is, rare but high-impact events — “are unpredictable, and their probabilities are not scientifically measurable. No matter how many dollars are spent on research, predicting revolutions is not the same as counting cards; humans will never be able to turn politics into the tractable randomness of blackjack.”

It sounds convincing; in politics, there are so many moving pieces that, for all intents and purposes, every historical event is unique. You can’t provide a frequency-based probability for something that has never happened before. What would you base your odds on? There is only one Donald Trump.

Or is there? Trump fits into the comparison class of system-destabilizing populists — from Huey Long to Ross Perot — pretty well. Just because politics is a complex system doesn’t mean we can’t make (and improve) political predictions. Indeed, assigning numerical odds to an event, even if doing so requires some guesses, improves the quality of political forecasts in the long run. Improving 10 or 20 percent on the proverbial dart-tossing chimp is still progress.

That is the key discovery one of us, Tetlock, made when IARPA — the agency that funds cutting-edge intelligence research — asked him in 2010 to participate in a geopolitical forecasting “tournament.” Each team was led by a scholar who could recruit, train and organize forecasters however they wished, but they all had to predict answers to the same questions, such as “Will the euro fall below $1.20 in the next year?” or “Will the president of Tunisia flee to exile in the next six months?”

Tetlock’s forecasters did extremely well, and within a few years, the best of them — a few hundred ordinary Americans — were even out-predicting career intelligence analysts (who had access to classified information) by about 30 percent. They were anointed “superforecasters.”

What Tetlock and his colleagues did was teach them to think probabilistically. Humans are not “natural statisticians,” as psychologists Amos Tversky and Daniel Kahneman have noted; we prefer to think in terms of narratives, even unfounded or inconsistent ones. So there were great benefits in learning rough-and-ready statistical concepts, which, it turns out, increase the odds of accurate predictions.

But then why did Silver, one of the most probabilistically astute observers of American politics, get Trump wrong?

The short answer is that he didn’t. In September, Silver gave Trump a 5 percent chance of winning the Republican nomination, and we don’t yet know who the nominee will be. But let’s say Trump wins. Does that make Silver wrong? Not necessarily: If we could rerun history 100 times, maybe Trump would lose 95. Of course, that experiment is impossible, which raises another metaphysical quandary: If predictions can never be declared right or wrong, how does probability help us navigate the future?
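One way to build intuition for that “rerun history” thought experiment is to simulate it. The sketch below takes only Silver’s 5 percent figure from the text; everything else is an illustrative assumption, treating each imagined rerun as an independent coin flip with those odds.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def rerun_history(p_win=0.05, reruns=100):
    """Count wins across imagined independent reruns of history,
    each won with probability p_win."""
    return sum(random.random() < p_win for _ in range(reruns))

print(f"Wins in 100 simulated reruns: {rerun_history()}")
# With p_win = 0.05, the event typically occurs in roughly 5 of 100 reruns.
# In the one history we actually get, it simply happens or it doesn't,
# which is why a single outcome cannot falsify the 5 percent forecast.
```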

The answer lies in measuring a forecaster’s performance over many predictions. Do the things you say will happen 5 percent of the time actually happen about that often? Do you assign high probabilities to events that happen and low probabilities to those that don’t, as opposed to playing it safe with middle-of-the-road predictions? By answering these questions, we can find out whose forecasts are generally the most accurate — even if we can’t say they were “right” — and use the results to refine our beliefs and plan for the future.
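That bookkeeping has a standard form: calibration checks and the Brier score, a common accuracy measure for probability forecasts and the kind of scoring used in forecasting tournaments. Here is a minimal sketch, with invented forecasts and outcomes purely to show the arithmetic.

```python
# Scoring a forecaster's track record: Brier score (lower is better) plus a
# simple calibration check. The forecasts and outcomes below are made up.
forecasts = [0.90, 0.80, 0.70, 0.30, 0.10, 0.05, 0.60, 0.20]  # stated P(event)
outcomes  = [1,    1,    0,    0,    0,    0,    1,    1   ]  # 1 = it happened

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f} (0.0 is perfect; always saying 50% scores 0.25)")

# Calibration: group forecasts into bins and compare the stated probability
# with how often the event actually occurred in each bin.
bins = {}
for p, o in zip(forecasts, outcomes):
    bucket = round(p, 1)  # crude 10-percentage-point bins
    bins.setdefault(bucket, []).append(o)

for bucket in sorted(bins):
    hits = bins[bucket]
    print(f"said {bucket:.0%}: happened {sum(hits)} of {len(hits)} times")
```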

Individuals, businesses and policymakers often face choices involving competing priorities and limited resources. Probabilistic predictions, especially from forecasters who have proved their accuracy over time, can enable better decisions, and even small improvements in predictive ability can mark the difference between danger and security, recession and growth, war and peace. Imagine that the intelligence community had been more circumspect in 2002, saying there was a 75 percent chance that Iraq had weapons of mass destruction (and a 25 percent chance it did not) instead of bluntly stating, “Baghdad has chemical and biological weapons.” Would Congress still have authorized the use of force? No one knows for sure, but lawmakers might have been more cautious. Decreasing the odds of multi-trillion-dollar mistakes is not something to sniff at.

What about supposed black swans, though? It’s true that judging the accuracy of forecasts involving extremely unlikely events is harder, because they could take decades or even millennia to play out. But there are still standards we can use to benchmark those odds, especially compared with other unlikely events. So even if we can’t assign an objective probability to an alien invasion, we can presumably say it’s less likely than, say, war with Russia and prepare accordingly.

A true black swan is, by definition, a completely unforeseeable event, and there are relatively few of those. The 9/11 attacks are often cited as an example, but there were many data points suggesting that al-Qaeda wanted to attack the United States and that terrorists might use airplanes as weapons. (Tom Clancy had even published a book in which a pilot intentionally crashes a jetliner into the Capitol.) As the 9/11 Commission Report put it, the attacks “were a shock, but they should not have come as a surprise.”

Likewise, the intelligence community considered the possibility of the Soviets placing missiles in Cuba, of Islamists overthrowing the shah of Iran and of the Soviet Union collapsing under the weight of communism. That does not mean that its forecasts were accurate! But if these scenarios were imaginable, then they were predictable in a ballpark probabilistic sense. And the accuracy of those predictions could have been used to refine the intelligence community’s models of the world.

Prediction is not positivism: We need to be humble about what we know and what we don’t know — and always remember that a probability is just that. There are limits to our foresight, but better prediction can reduce the uncertainty that erodes confidence in the future. Trump is wrong: America doesn’t need to be made great again. But prediction just might make it better.

Twitter: @PeterScoblic
