Earlier this week, I wrote about a poll testing how people felt about Donald Trump. The responses were swift.

I also received some emails, many of them asking more earnestly why I was still writing about the polls after an election in which they were so wrong.

To which I say two things:

  1. The polls weren't actually that wrong. The media's folly in the 2016 election was more about our reading of them. And ...
  2. Even if they are flawed, we can't simply disregard them, throwing the baby out with the bathwater. What we need is more perspective and more healthy skepticism.

To that first point, I refer you to Scott Clement's recap of how public polling fared in 2016. Basically, national polling was about as accurate as it has been for years. Hillary Clinton was favored by an average of about three points, and she won the popular vote by two points — while losing the electoral college, of course.

Where the polling went wrong was in some key states, particularly in one area of the country: the Rust Belt and the Midwest. Here's a handy chart:

The big misses were in Wisconsin, Iowa and Ohio, with poll averages differing from the final margin by six to seven points in each Trump state. In Minnesota, we simply didn't have much data to work with. And in Michigan, the late polls seemed to show a tightening race, even if Trump's narrow win came as a surprise.

And that last point is key: We had polls in Michigan (of varying quality) showing anywhere from Clinton +5 to Trump +2, none of which were statistically significant given their sample sizes (a lead needs to be nearly two times the error margin to be significant). But too many of us assumed it was ironclad that Clinton would win. I sure did. This belief wasn't only because of the polls, mind you; it was also because of our preconception that Trump couldn't win a blue-leaning state like Michigan.
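To see why a lead like Clinton +5 isn't statistically meaningful, here's a minimal sketch of the arithmetic, assuming a hypothetical poll of 800 respondents (the sample size and the specific numbers are illustrative, not from any particular Michigan survey):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a single candidate's share,
    using the conservative p = 0.5 assumption."""
    return z * math.sqrt(p * (1 - p) / n)

def lead_is_significant(lead_pts, n):
    """A lead must be roughly twice the single-share error margin
    to be statistically distinguishable from a tie."""
    return lead_pts > 2 * 100 * moe(n)

# Hypothetical poll: Clinton +5 among 800 respondents.
print(round(100 * moe(800), 1))   # single-share error margin, in points
print(lead_is_significant(5, 800))
```

With 800 respondents the single-share error margin is about 3.5 points, so a lead has to approach seven points before it clears the bar — Clinton +5 doesn't.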

The other important thing that happened is that late-deciders broke for Trump. You can make a credible argument that even in some of these states, the polls weren't as wrong as they seemed; they just couldn't account for those who made their decisions late. And those people broke strongly for Trump.

After looking at Trump's big wins among late-deciders in the decisive states, I calculated the following:

If we grant that the [exit poll] numbers are all spot-on — a hefty “if,” given the wiggle room in exit polls — it would mean Trump in the final week gained about four full points in Wisconsin, 2.5 points in Pennsylvania, two points in Florida and 1.5 points in Michigan.

Those swings were all bigger than Trump's margins of victory in each state, meaning late-deciders alone could have accounted for his upset wins. Many of the polls may have been accurate at the time they were conducted, showing Clinton ahead before Trump surged at the end.
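The arithmetic behind those swing estimates can be sketched simply: multiply the share of voters who decided late by how lopsidedly they broke. The exact figures below are illustrative assumptions in the ballpark of the exit-poll numbers discussed above, not precise exit-poll results:

```python
def late_decider_shift(share_late, trump_margin_late, prior_margin=0.0):
    """Points the final margin moves because late-deciders broke one way,
    relative to polls that effectively counted them as splitting along
    prior_margin (default: evenly)."""
    return share_late * (trump_margin_late - prior_margin)

# Illustrative Wisconsin-style numbers: ~14% decided in the last week,
# breaking for Trump by ~29 points, versus a roughly even earlier split.
print(round(late_decider_shift(0.14, 29), 1))  # points gained in the final week
```

With those assumed inputs, the shift works out to about four points — enough to flip a state Clinton led narrowly in the pre-election polling.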

This isn’t to let polling firms off the hook. While late-deciders may explain part of state polling errors, the systematic understatement of Trump’s support was significant, and it’s important to diagnose why. Still, late-deciders account for portions of some of the bigger gaps.

The point is that it became fashionable to beat up on the polls after the election. Yet a big reason we missed Trump's win is that we were overly certain, based on both the polling data and our own preconceptions. Some weren't quite so certain, though. FiveThirtyEight, after all, gave Trump a very real 29 percent chance of winning. Others gave him much less of a chance (as low as 1 percent), but there was clearly enough doubt within the numbers that his win wasn't as impossible as we seemed to think.

Now back to No. 2 above: Even if the polls were off by a point or even a few points, that's simply the cost of doing business. Polls have margins of error, and errors happen. We shouldn't take a two-point or even a four-point lead as gospel.

And on a post like the one I wrote Tuesday, even that amount of error wouldn't really change things much. When you're dealing with an election, a two- or three-point shift can change the winner. When you're dealing with Trump's approval rating as president-elect, it's the difference between him being at 37 percent and being at 39 percent or 35 percent. That compares to the 80 percent who approved of Obama's handling of his transition in a January 2009 Washington Post-ABC News poll. Unless the error is much larger than that, the data are still accurate enough to tell us plenty about how Trump is doing and how the American people feel.

And then there's the simple fact that polling provides our best barometer for how the entire nation feels about something. Trying to quantify how people feel about an issue or a controversy is impossible without it. If we suddenly dismiss it completely, we're left relying on far less sound measures of popular opinion, like our own social networks (which can be heavily slanted) and anecdotes. The latter can be very helpful in reporting a political story, but they don't tell you how the country feels as a whole.

We should all probably be a little more careful about how certain we are when using polls. But they're still a great tool, and they got a little bit of a bad rap this election.