Election forecasting season is underway. FiveThirtyEight has released a model that gives Democratic presidential nominee Joe Biden a nearly 75 percent chance of victory. The Economist’s model is more bullish, giving the former vice president an 87 percent probability of taking the Electoral College and a 97 percent chance of prevailing in the popular vote. Political scientist Alan Abramowitz used a completely different system, based on presidential approval polls, and gave Biden a 70 percent chance of winning. And, as Election Day draws closer, more analysts and academics will jump in and issue their own quantitative predictions.

Mathematical election predictions can seem both tantalizing and frustratingly opaque. People want to know what will happen, especially in an election as consequential as this one, but they often don’t understand how the people who build these predictive tools reach their conclusions. They may be dazzled by the presentation and the numbers, but they’re not sure exactly what those images and figures mean. And most of all, lay readers may be confused by seemingly credible forecasts that come up with wildly divergent results.

But there is good news: You don’t have to be a stats nerd to follow models without driving yourself crazy. Over time, I’ve developed a few rules that anyone — with any level of math experience — can use to better understand what these pre-election forecasts mean and how to think about them.

The first rule of model reading: Don’t check your brain at the door.

Models often captivate readers with beautiful charts and precise percentages, making it easy to treat forecasts as truths plucked from the mind of an all-knowing, all-powerful, loving Science. But models are just theories. Modelers believe, based on past data, that the president’s party benefits when the economy is good; that a polling lead is a good but imperfect predictor of victory; that polls can be off. So they use past data to translate those ideas into mathematical formulae, then use that math to gauge the likelihood that each candidate will win. There’s no secret sauce; just data-driven theories about the election.
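To make that concrete, here is a deliberately simplified sketch of the kind of calculation a forecast rests on. This is not any outlet's actual model; the function name, the polling lead, and the assumed size of polling error are all illustrative. It simulates many hypothetical elections in which the polls are off by a random amount and simply counts how often the leading candidate still comes out ahead:

```python
import random

def win_probability(poll_lead, poll_error_sd=4.0, n_sims=100_000, seed=42):
    """Toy forecast: estimate a win probability by simulation.

    poll_lead: the candidate's current polling lead, in percentage points.
    poll_error_sd: assumed standard deviation of polling error, in points
        (an illustrative number, not an empirical estimate).
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        # The "true" margin is the polling lead plus a random polling error.
        actual_margin = poll_lead + rng.gauss(0, poll_error_sd)
        if actual_margin > 0:
            wins += 1
    return wins / n_sims

# A 5-point lead with a 4-point typical error wins often, but not always.
print(win_probability(5.0))
```

A candidate up 5 points under these assumptions wins roughly nine simulations in ten, which is how a solid polling lead becomes a high-but-not-certain probability rather than a guarantee. Real models layer in economic indicators, state-by-state correlations and much more, but the underlying logic is the same: theories plus past data in, a frequency out.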

It’s appropriate and healthy to stress test these theories. If a model is backed by a nonsensical statistical correlation — such as the relationship between The Washington Football Team’s performance and presidential election results — it’s okay to chuck it. If a model produces an outlandish result, like a near-certain victory for either candidate when Election Day is more than two months away, that model is suspect. You should also doubt models that have ridiculous implications. For instance, American University history professor Allan Lichtman’s widely circulated “13 keys” model predicts that Biden will certainly win the presidential election. But tiny switches — like giving Trump the win on Lichtman’s subjective “charisma” factor and arguing that Trump had a “major foreign policy success” — would flip the result and make Trump a surefire winner. You don’t need an advanced math degree to know that doesn’t make sense.

The second rule: Watch out for events that blow up old patterns.

Good models are fundamentally empirical: They find patterns in past data and use them to predict the future. Empiricism undergirds essentially all modern science, and intellectually it’s the only game in town. A nonempirical prediction is better known as a wild guess.

But empirical models can only see what they’re designed to see. As Nate Silver, maybe the most famous modeler in American politics, pointed out when he unveiled his 2020 model, even the most rigorous empirical model can’t look inside Trump’s mind and detect whether he’d try to hold on to power by extraconstitutional means. There’s just no way to gather that sort of data on Trump’s decision-making and responsibly fit it into a model because there is no modern precedent to compare him to.

And maybe more important, models can’t always handle history-breaking elections. A model that used early endorsement tallies to predict the winner of presidential primaries would probably have a decent track record of picking the nominee, until 2008, when Barack Obama rewrote the playbook by winning over voters first and earning establishment support later.

Frustratingly, it’s impossible to know ahead of time if we’re in a rare, unprecedented election that breaks all our models or a normal one.

Which brings me to the third rule: Take good models seriously.

When you find a good model — one that’s built on a strong theory, uses historical data, has a decent record of past predictions and issues sane forecasts — go ahead and allow it to influence your beliefs. During the 2016 election, many political professionals believed that Hillary Clinton was a lock for the White House, and ignored (or tried to pick apart) well-built models from FiveThirtyEight and the New York Times’s Upshot that said Trump still had a 15 to 30 percent chance of winning. We all know what happened next.

In 2020, taking good models seriously means taking Trump seriously. Biden has the edge in major models, but he doesn’t have the race locked down. Democrats would be foolish to assume that Trump can’t win and Republicans would be wrong to think that Trump is the favorite based on boat parades or wild hope.

This race, like the 2014, 2016 and 2018 elections, has a clear front-runner. But the models don’t and can’t guarantee that he’ll be the winner. Ordinary voters and political strategists should act accordingly.
