Clinton campaign headquarters on the morning of Nov. 5, 2016. (Philip Bump/The Washington Post)

On the morning of Nov. 5, 2016, I went to Hillary Clinton’s campaign headquarters in Scranton, Pa. — hometown of Joe Biden (who was holding a rally there later that day) and the sort of small urban center in the contested state that seemed like it would be a bellwether for the outcome. As soon as I walked in, something seemed off: The place was nearly empty. Two volunteers sat at a table at the back of the room, but nobody was preparing to go out and talk to voters. It looked like a campaign headquarters on a Saturday in August, not one on the weekend before an election.

When I stopped by Donald Trump’s headquarters later, a bit outside of town, things were more hectic. A dozen people were working the phones, including a woman who drove from New Jersey. Given how much had been written about Trump’s inattention to the ground game (including by me), I was surprised, and my article reflected that.

After the fact, I can look at that moment as significant and revelatory: How’d I miss Trump’s win, having seen what I saw in Scranton? A few days before, I’d even written a story pointing out that Trump was closer to the presidency than he’d been at any other point in the campaign. A few days before that, I’d noted that undecideds appeared to be breaking to Trump, boosting him in the polls. But going into Election Day, I still expected Hillary Clinton to win.

It’s easy to cherry-pick moments and signals that suggested that she wouldn’t, as I did above. Going into the actual voting, though, I was relying on poll numbers that suggested that a Clinton win was likely: a national average that had her up by a few points and state polls that showed her winning where she needed to. When the Marquette Law School poll of Wisconsin showed Clinton with a comfortable lead less than a week before Nov. 8, I tweeted, “That whooshing sound you hear is Democrats exhaling.” If Clinton was holding Wisconsin and running close in Florida (which she was), it was very hard to see how Trump could win.

Unless, you know, the state polls were off. Which they were.

As Election Day approached, one site was consistently putting up warning flags: Nate Silver’s FiveThirtyEight. It got a lot of grief for a Nov. 4 article by the site’s Harry Enten titled “Trump Is Just A Normal Polling Error Behind Clinton.”

“Four years ago, an average of survey results the week before the election had Obama winning by 1.2 percentage points,” Enten wrote. “He actually beat Mitt Romney by 3.9 points. If that 2.7-point error doesn’t sound like very much to you, well, it’s very close to what Donald Trump needs to overtake Hillary Clinton in the popular vote.”
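
Enten’s arithmetic is easy to check. Below is a minimal sketch; the 3-point figure for Clinton’s national lead is my own stand-in for the “few points” the national average showed, not a number from his article.

```python
# A rough version of Enten's "normal polling error" arithmetic.
obama_poll_2012 = 1.2    # final-week polling average: Obama +1.2
obama_actual_2012 = 3.9  # actual result: Obama +3.9
error_2012 = round(obama_actual_2012 - obama_poll_2012, 1)
print(error_2012)        # 2.7 points

# Assumed Clinton national lead of about 3 points ("up by a few points").
clinton_poll_2016 = 3.0
# If a 2012-sized error ran against Clinton instead of in Obama's favor:
print(round(clinton_poll_2016 - error_2012, 1))  # 0.3 -- essentially a tie
```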

On Halloween, another prescient post, this one from Silver, pointed out that the odds of a split between the electoral college and the popular vote were increasing. “[A]s of early Monday evening,” Silver wrote, “our polls-only model gave Hillary Clinton an 85 percent chance of winning the popular vote but just a 75 percent chance of winning the electoral college. There’s roughly a 10 percent chance of Trump’s winning the White House while losing the popular vote, in other words.”
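
Silver’s “roughly 10 percent” follows from his own two numbers under one simplifying assumption that I’m adding here: that Trump had essentially no chance of winning the popular vote while losing the electoral college. A quick sketch:

```python
# Back-of-the-envelope version of the Halloween polls-only numbers.
p_clinton_popular = 0.85    # chance Clinton wins the popular vote
p_clinton_electoral = 0.75  # chance Clinton wins the electoral college

# Assuming Trump essentially cannot win the popular vote while losing the
# electoral college, the whole gap between these two numbers is the scenario
# where Clinton wins the popular vote but Trump wins the White House.
p_trump_wins_losing_popular = round(p_clinton_popular - p_clinton_electoral, 2)
print(p_trump_wins_losing_popular)  # 0.1 -- Silver's "roughly 10 percent"
```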

A 10 percent chance that what happened would happen. Something that almost no one else was predicting.

On Tuesday, Silver and the New York Times’ Maggie Haberman got into a bit of a tiff on Twitter, as journalists do. One Haberman tweet, quoted below, is worth isolating.

It’s important to contextualize Silver’s relationship with traditional political reporters. In 2012, Silver became famous for his forecasting model, which suggested that Barack Obama would cruise to victory even as political reporters were describing a nail-biter that could go either way. On Nov. 6, Election Day that year, Silver wrote that his modeling of the polls gave Romney an 8 percent chance of winning a majority of electoral votes, compared with the 28.6 percent odds FiveThirtyEight gave Trump on Election Day last year.

Even then, though, Silver noted that 8 percent odds weren’t a guarantee. “As any poker player knows, those 8 percent chances do come up once in a while,” he wrote. “If it happens this year, then a lot of polling firms will have to re-examine their assumptions — and we will have to re-examine ours about how trustworthy the polls are. But the odds are that Mr. Obama will win another term.”

As you know, he did — and by a wider margin than expected. This was enormously frustrating to a number of political reporters, some of whom had framed the situation as a contest pitting anecdotal reporting against data for the all-important title of “best political prognosticator.” If that was the contest, Silver won easily in 2012.

When Trump won in 2016 — something Silver et al. figured had roughly a 1-in-4 chance of happening — it was an opportunity for those frustrated by the results in 2012 to exact revenge. More broadly, it was a perceived victory for traditional reporting (which had highlighted Trump voters frequently) over the polls, which were portrayed as having gotten the winner wrong. Without intentionally picking on Haberman too much, that’s what her tweet conveys: “It’s this keen understanding of media and politics that you demonstrated with your own modeling.” Your numbers didn’t get it. Our reporting did.

But that comment misstates what Silver’s team actually did.

Trump’s victory led to an immediate bull market in taunting. People looking to overstate how surprising Trump’s victory was quickly conflated Silver’s model with other models that gave Clinton, for example, a 98 percent chance of winning. (Haberman’s colleagues at the Times had Clinton at 85 percent on Election Day.) That somehow became 99 percent odds, and it’s nearly impossible to write about polls these days without someone responding on social media with a comment about how “your polls predicted that Clinton had 99 percent odds of winning.”

They didn’t. In fact, most polling was national polling, and it predicted exactly what happened: Clinton won a few percentage points more of the popular vote than Trump did. But it’s also important to note that even FiveThirtyEight’s 71.4 percent odds for Clinton on Election Day — the output of a forecasting model that incorporated many polls — weren’t actually wrong, as such.

Last week, Silver wrote an essay criticizing the media’s general inability to recognize that a 71.4 percent chance of winning is not the same as a 100 percent chance of winning, a critique that holds for those outside the media, too. FiveThirtyEight had assiduously explained what its models showed and how likely various outcomes would be. But people — many remembering that Silver had said Obama would win in 2012 — simply saw the bigger number for Clinton and figured, “Clinton’s going to win.” And since “Nate Silver” had become synonymous with “election forecasting,” FiveThirtyEight not only got the blame for getting the result “wrong” but also got blamed for other people’s wrong predictions.
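
One way to internalize the difference is to treat a forecast probability as a long-run frequency rather than a verdict. This is a purely illustrative simulation, not anything FiveThirtyEight published: an outcome given 28.6 percent odds should come up in roughly 29 of every 100 runs, just as the 8 percent Romney scenario should come up in roughly eight.

```python
import random

random.seed(42)

def underdog_win_rate(p: float, trials: int = 100_000) -> float:
    """Simulate many elections in which the underdog wins with probability p."""
    return sum(random.random() < p for _ in range(trials)) / trials

print(underdog_win_rate(0.08))   # ~0.08: Romney's Election Day 2012 odds
print(underdog_win_rate(0.286))  # ~0.29: Trump's Election Day 2016 odds
```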

Silver’s essay is exhaustive in pointing out the various incorrect assumptions that people made, but none is more potent than this:

[T]he forecast is continuous, rather than binary. When evaluating a poll or a polling-based forecast, you should look at the margin between the poll and the actual result and not just who won and lost. If a poll showed the Democrat winning by 1 point and the Republican won by 1 point instead, the poll did a better job than if the Democrat had won by 9 points (even though the poll would have “called” the outcome correctly in the latter case).
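
In code, that scoring rule is nothing more than the absolute difference between a poll’s margin and the actual margin. A minimal sketch using the hypothetical numbers from the quoted passage (positive values mean the Democrat is ahead):

```python
def margin_error(poll_margin: float, actual_margin: float) -> float:
    """How far the poll's margin was from the actual margin, in points."""
    return abs(poll_margin - actual_margin)

poll = 1.0  # a poll showing the Democrat up by 1 point

# The Republican wins by 1: the poll "called" the winner wrong,
# but missed the margin by only 2 points.
print(margin_error(poll, -1.0))  # 2.0

# The Democrat wins by 9: the poll "called" the winner right,
# but missed the margin by 8 points -- a worse performance by this measure.
print(margin_error(poll, 9.0))   # 8.0
```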

The L.A. Times had a poll that consistently showed Trump winning the election, and after Trump won, many pointed to it as unusually successful in a bad year for polling. But that poll essentially had Trump winning the popular vote by a wide margin, which he didn’t. It got the winner right almost by accident; measured against the actual margin, it was one of the worst polls of the cycle.

Haberman’s suggestion that Silver got it wrong is, ironically, a function both of misunderstanding Silver’s math and of dismissing (or misunderstanding) how FiveThirtyEight framed the election before Election Day.

It’s easy for me now to point to stories I wrote that suggested that Trump might end up winning. I knew he might, going into Nov. 8, but I didn’t think he would, falling victim to that same “71 percent is similar to 100 percent” thinking. The best contextualization for what actually happened, though, didn’t come from me (sadly) or from reporting about Trump voters in the field.

It came from FiveThirtyEight, and it’s not their fault that you didn’t realize it.