It’s undeniable that pollsters had some big, high-profile misses in the presidential race. But that isn’t the whole story.
The most prominent error appears to have been in Florida: Polls put Biden ahead of Trump by 2.5 points, yet Trump currently holds a 3.4-point margin with almost all votes counted. Surveys also likely underestimated Trump in Ohio, where he led by 0.8 points heading into Election Day but is ahead by eight points with 98 percent of the vote counted. In Texas, Trump is ahead by six points after leading the pre-election polls by only one point. And although Biden won Michigan and Wisconsin, his margins were much thinner than polls suggested.
Down-ballot forecasts based on state-level polling were far from perfect, too.
The most common prediction was that Democrats would end up with between 50 and 52 Senate seats, with a possibility of upsets in red states such as Kansas and South Carolina. The first prediction theoretically could still hold true, as it’s still too early to call key races in North Carolina and Georgia. But confirmed losses in states such as South Carolina, Montana, Kansas and Kentucky, and Sara Gideon’s concession to Republican Sen. Susan Collins in Maine, make clear this was no Democratic blowout. And although Democrats are in a strong position to retain control of the House, they aren’t racking up the margins they hoped for.
Despite all this, the apparent results of the presidential election fall within the "likely range" of outcomes projected by polls and forecasts. Those ranges exist for a reason: Some readers treat these mathematical analyses as dispatches from the future, but no poll or model can meet that expectation. The nature of public opinion research, which uses a sample as a stand-in for a larger population such as a congressional district, a state or the whole nation, means that polls are more shotguns than sniper rifles: They lack long-range precision, and they produce a spread of possibilities rather than one pinpoint prediction.
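To see why a sample produces a spread rather than a pinpoint, consider the textbook sampling-error calculation. The sketch below is illustrative only: the 800-respondent sample size and the 50 percent vote share are assumptions chosen for the example, not figures from any poll discussed here.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a sample
    proportion p estimated from n respondents (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical state poll: ~800 likely voters, candidate at 50 percent.
moe = margin_of_error(0.50, 800)
print(f"Margin of error: +/- {moe:.1f} points")
```

For these assumed numbers, the margin of error comes out to roughly plus or minus 3.5 points on each candidate's share, and the gap *between* two candidates is even more uncertain, which is why a nominal lead of a few points can evaporate without the poll itself being "wrong." Real polling error is larger still, since this formula captures only random sampling noise, not systematic misses of the kind described above.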
Heading into the election, it was clear that late deciders might break for Trump, or that surveys could systematically underestimate him, leading to a narrow Biden victory or a Trump win. And that’s the territory we’re in now: Biden and Trump both have paths to victory, which was always possible given a normal amount of polling error.
And we won’t know exactly how much the polls erred until the vote is fully counted. While fast-counting states such as Florida and Texas have reported results, races in Pennsylvania (a key state the polls missed in 2016), North Carolina, Georgia, Arizona and Nevada have not been called. The national popular vote will also take a while to finalize: California, the nation’s most populous state, is a notoriously slow vote counter. Until results are finalized, any discussion of survey or forecast error in the national popular vote, or any victory claims by pollsters, will be incomplete.
Given all of this, it is far more useful to reckon with the limits of what even the most perfectly executed polling can tell us than to condemn the entire science of measuring public opinion.
Pollsters and analysts can throw the statistical kitchen sink at problems such as election prediction, but human behavior is hard to predict. Anyone who was quick to dismiss alternative metrics such as rally sizes or boat parades before Tuesday should take a deep breath before writing off a far more rigorous process. Yes, poll design can be flawed, and quantitative models can miss the mark. But they remain our best tools for understanding the thoughts, views and feelings of American society.