# Nate Silver: ‘There were some models that gave Trump as little as a 1 or 2 percent chance’ of winning in 2016

Nate Silver, 42, is founder and editor in chief of FiveThirtyEight, owned by ABC News. He correctly predicted the outcome in 49 of the 50 states in the 2008 presidential race and in all 50 states in 2012. He is the author of "The Signal and the Noise: Why So Many Predictions Fail — But Some Don't."

People have called you a forecasting guru, a hero to nerds, a disrupter, a data journalist. How do you describe what you do?

I’m a journalist and an applied statistician, basically. And trying to explore the overlap between those two things. There are lots of things I don’t do well, but I am actually good at building statistical models that take a complex process, like the presidential primaries or the NBA season, and represent that mathematically. And it’s not just a matter of plugging some numbers into a computer; you’re trying to actually create a model of the real world. And the real world is complicated.

Before the 2016 presidential election, your record was pretty spot-on in forecasting who would win. And you were closer than most on the 2016 election.

There were some models that gave Trump as little as a 1 or 2 percent chance, and we had him with a 30 percent chance — 29 percent, I think. Those are really different answers. One is saying, look, Trump’s going to win the election about as often as a good baseball player gets a base hit. And one is saying, this is a once-in-a-blue-moon scenario. We were quite emphatic that the election was competitive, and that Trump had a chance.

If you were betting the line in Vegas, then you would have used our model to lay a lot of money on Trump. Even though he wasn't a favorite, our model said Trump was significantly underpriced and that, on average, you'd double your money by betting on him. Even though you lose most of the time, you make so much when he wins. And so, to us, the fact that Trump won this kind of narrow electoral college victory was exactly the scenario that our model identified as the reason he was more likely to win than people assumed. Because we'd done the historical work and the data work and the reporting work to actually kind of think through these things a little bit more deeply.
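The "double your money" claim here is a standard expected-value calculation. As a minimal sketch (the specific market price below is a hypothetical illustration, not a figure from the interview): if your model puts the true probability at 29 percent but the betting market prices the outcome as if it were about 14.5 percent likely, a fair-odds bet returns roughly twice your stake on average.

```python
def ev_multiple(model_prob: float, market_prob: float) -> float:
    """Expected return per $1 staked, assuming the market pays
    fair decimal odds of 1 / market_prob on a win.

    On average you collect model_prob * (1 / market_prob) dollars
    back for each dollar bet, losing the stake the rest of the time.
    """
    decimal_odds = 1.0 / market_prob   # payout per $1 if the bet wins
    return model_prob * decimal_odds   # long-run average return per $1

# Hypothetical numbers: model says 29%, market implies ~14.5%.
print(ev_multiple(0.29, 0.145))  # -> 2.0, i.e. double your money on average
```

The point is that a bet can be profitable in expectation even when it loses most of the time: the 29 percent win rate is more than offset by the outsized payout when the underpriced outcome occurs.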

I was wondering if that election brought you a different way of looking at things?

No. Because from a polling standpoint, 2016 was an extraordinarily boring and ordinary and normal election. From every other standpoint, it was remarkable. And not necessarily remarkable in a good way, depending on how you felt about the outcome. The one thing that said Donald Trump could become president was the polls. The polls showed him within the margin of error. The polls showed him, by the way, winning the Republican primary pretty much after the first month.

So the forecast is objective, but the way people look at probabilities is very contingent on what their assumptions are. The same person can look at a 70-30 probability, depending on their assumptions, and say, “Okay, this proves that such-and-such is going to happen, thank God.” Or they can say, “Oh my gosh, there’s still a 30 percent chance of this happening.” You know, imagine if you’re boarding a flight and the flight attendant says, there’s a 70-30 chance that the plane will land safely. It’s not terribly reassuring, right? So in some ways, you want people to actually trust their gut less. [Laughs.] You want people to say, look, maybe you can’t see how Trump wins the election or Bernie Sanders wins the primary or whatever else, but just be aware that, historically, this is something that could happen.

If you were to apply your analysis to another area, what would that be?

The more I do this, the more focused I get. One misconception people have about the big-data-slash-analytics world is, if you know how to do analytics, you can solve any problem. I mean, not really, right? You still need a lot of domain knowledge about the field you’re trying to study.

I play poker on the side, right? A pretty on-brand activity. Well, there are people who specialize in poker for a living. They probably have the same types of intelligence that I have, but spend their whole life devoted to it. So I’m probably better than the average person off the street, but I get my a-- kicked by dedicated professionals.

Elections are actually really tricky to cover because they require a lot of specialized knowledge. And there’s a lot of expertise that has historically been missing from campaign coverage. What happens, I think, is when people are smart but don’t necessarily have expertise, they tend to fill in the blanks with stuff that’s speculative. That’s the kind of polite way to put it, right? The impolite way is that you fill it in with a kind of B.S. that sounds good at a cocktail party, but which doesn’t necessarily hold up that well to scrutiny. I almost feel like we’re a journalistic throwback, saying, “Let’s take a step back and give reliable, unbiased information to assess a situation.”

This interview has been edited and condensed. KK Ottesen’s latest book is “Activist: Portraits of Courage.”