Pit Manager Nicole Mavromatis uses a level to check the balance of a roulette wheel at Maryland Live Casino in 2013. (Photo by Linda Davidson / The Washington Post)

If there’s one thing I know absolutely, irrefutably, 100 percent for certain, it’s that people don’t understand probabilities.

This is on my mind because of March Madness, and this new feature at 538 where they not only tell you who is most likely to win but also update the probabilities as the game goes on. While watching the game on TV, you can follow the changing odds on your computer screen while you simultaneously live-tweet the event and text your friends on your smartphone. Ideally, you will do this while switching channels between CBS and TBS, except when the networks show both games on a split screen. Also you should make calls to your bookie. And your psychiatrist.

According to 538, my Gators have a 54 percent win probability Friday night. But Neil Greenberg’s fancy stats column gives the Gators a 62 percent chance of winning. Is that a contradiction? No: Just two different estimates of something innately uncertain and involving multiple metrics of imprecise significance.

That’s my guess, at least.

This shifting-probabilities gimmick at 538 reminds us that probabilities aren’t the same things as predictions. We don’t know how the probability cloud will collapse into a singular reality (sorry to go all quantum physics on you). We live in a world that at both micro and macro levels is chaotic, fluid, and fundamentally — if I may use another highly technical term — squirrelly.

Unfortunately, it’s pretty much impossible to live a normal, emotionally stable life without finding various perches of certainty, belief, faith, conviction, etc. You can’t go around in a probabilistic daze.

Evolution rewards snap judgments. Sometimes you just have to take off running. But we make mental errors all the time. For example, we typically fail to see how low-probability outcomes will become far more likely, if not a certainty, given enough opportunities. We also overestimate the extent to which our direct experience predicts future probabilities. Anecdotes mislead. So do statistical studies with very small data sets. (Here in the science pod we keep on the lookout for studies that turn out to be based on the thoughts of three guys on bar stools.)
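If you want to see how quickly "unlikely" turns into "expected," the arithmetic is simple. Here's a back-of-the-envelope sketch in Python (the 1-in-100 event is my invented example, not anyone's real data):

```python
# A rare event becomes nearly inevitable given enough independent chances.
# Chance of at least one occurrence in n tries: 1 - (1 - p)**n
p = 0.01  # a 1-in-100 event on any single try (invented for illustration)

for n in (1, 10, 100, 500):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:4d} tries -> {at_least_once:6.1%} chance it happens at least once")

# 1 try: 1.0%; 10 tries: 9.6%; 100 tries: 63.4%; 500 tries: 99.3%
```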

My friend Michael Lewis has published a book, “The Undoing Project,” that explores the long collaboration of Amos Tversky and Daniel Kahneman. The Tversky-Kahneman research showed that people are not rational when it comes to probabilities. Consider the “Linda problem.” (Wikipedia covers it under “Conjunction fallacy.”) Tversky and Kahneman ran an experiment in which students were given the characteristics and background of someone named Linda (majored in philosophy, concerned about justice) and then were asked to identify which sentence most likely describes her. “Linda is a bank teller and is active in the feminist movement” was considered by a majority of students to be more probable than “Linda is a bank teller” — even though you can clearly see that the first has to be a subset of, and thus less probable than, the second.
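The conjunction rule behind this is unforgiving: the probability that two things are both true can never exceed the probability of either one alone. A toy calculation (the numbers are made up, purely for illustration):

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A), always,
# because P(B given A) can never exceed 1.
p_teller = 0.05                  # made-up: P(Linda is a bank teller)
p_feminist_given_teller = 0.95   # made-up: P(feminist, given she's a teller)

p_both = p_teller * p_feminist_given_teller
print(f"{p_both:.4f}")            # 0.0475
print(p_both <= p_teller)         # True, no matter what numbers you pick
```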

We struggle with probabilities embedded in a low-confidence framework — such as a snow forecast. Earlier this month we prepared for a big snowstorm here on the East Coast. Early computer modeling showed it might be historic — with one model showing 20 inches for the District. Our ace weather bloggers at the Capital Weather Gang wrote a series of posts in which they clearly explained that there were many uncertainties. Then the storm hit, the heaviest snow fell out in the middle of nowhere rather than in the big cities along the East Coast, and, sure enough, some people complained bitterly that the forecast was wrong. My colleagues acknowledged that it wasn’t a perfect forecast but argued that it was pretty darn good, and in fact I think they did a bang-up job, as always.

Marshall Shepherd published a blog post this week defending the forecast community in general:

Hurricane track forecasts by NOAA’s National Hurricane Center (see below) have significantly improved in the last several decades, and tornado warning lead-times are on the order of 13 minutes. Even with such positive metrics, forecasts will never be perfect. There will be challenges with uncertainty, probabilistic forecasts, inadequate data, coarse model resolution, and non-linearities associated with trying to predict how a fluid on a rotating body changes in time.

Probably we need to talk here about the November election. The pollsters said Hillary Clinton was going to win.

Except they didn’t actually say that. They said she probably would win. As I recall, the number-crunchers at the New York Times put her odds at 85 percent on Election Day. The 538 folks (Nate Silver) said she had about a 65 percent chance. Silver took flak for the low number, and wrote, “I’m kind of confused as to why people think it’s heretical for our model to give Trump a 1-in-3 chance.” After Trump won an electoral college victory thanks to an unexpected inside straight along the Great Lakes, many people howled that the polls were wrong and some pollsters writhed in self-hatred. Well, they were definitely off, particularly in the Rust Belt states that proved crucial to Trump’s victory. They actually got the national popular vote right, but Clinton overperformed in states she already had in the bag and underperformed in the swing states.

The less probable outcome happened because less probable outcomes sometimes do. President Obama understood this, as our Helena Andrews-Dyer reported:

“So I think the odds of Donald Trump winning were always around 20 percent,” he said. “That seems like a lot, but one out of five is not that unusual. It’s not a miracle.”

Speaking of low-probability, high-consequence events, let’s revisit the disaster at the Oscars. What were the chances that something as simple as handing Warren Beatty the right envelope would go disastrously awry? Obviously we can now say in hindsight that the chances were not zero.

The system was designed with failure modes lurking in plain sight. At the end of the night, when there was only a single award yet to be given, for Best Picture, Brian Cullinan of PricewaterhouseCoopers (PwC) had two envelopes in his hand. One was for Best Actress — already awarded to Emma Stone. That envelope existed because PwC had two copies of every envelope, in matching briefcases on either side of the stage. Cullinan’s partner was Martha Ruiz, also from PwC. They were the only two people in the hall who knew who the winners were. They had memorized them.

The backup plan was just that: their memories, their ability to rush out and say, wait, you have the wrong envelope. What the backup plan did not anticipate were the disabling effects of a blunder on national television at the crescendo of the Oscars. The testimony of the stage manager suggests that the PwC employees froze.

Any systems manager, any engineer, has to plan not only for how things will work, but for how they will fail, and how that failure might cascade and potentially take out backup safety systems. Remember the BP oil spill: The loss of well control, we eventually learned, not only let gas escape the well and reach the drilling rig Deepwater Horizon, where it caused an explosion; the initial gas “kick” was also so violent that it kinked the drill pipe threaded through the blowout preventer on the seafloor. The kink in the pipe created an obstruction when the blind shear rams in the blowout preventer tried to close in the well. It was like having a bone caught in your teeth.
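The math of redundancy shows why cascades matter. Two independent safeguards multiply their failure rates into something tiny; a common cause that can knock out both at once wrecks the arithmetic. A rough sketch, with failure rates invented purely for illustration:

```python
# Two safeguards that each fail 1% of the time, independently:
p_primary, p_backup = 0.01, 0.01
print(f"independent failures: {p_primary * p_backup:.4%}")  # 0.0100%

# Now suppose 1 in 10 primary failures (a kinked pipe, an on-air freeze)
# also disables the backup -- a common-cause failure:
p_common = 0.10
p_system = p_primary * (p_common + (1 - p_common) * p_backup)
print(f"with a common cause:  {p_system:.4%}")  # 0.1090%, about 11x worse
```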

Improbable? Sure. But not impossible, and there are oil rigs drilling into highly pressured formations all over the Gulf and there’s no way to buy down the risk to zero.

Here’s my bold prediction: Something very unlikely is going to happen one of these days.

Don’t say I didn’t warn you.
