But this does not tell us what will happen. Polls, at their best, give us a rough estimate of a candidate’s support at a given point in time. They show how major issues, loyalties and personal characteristics are shaping voters’ decisions. They cannot tell us whether Biden’s lead is safe.
Here’s what to know about what polls can tell us, where they can fall short and what’s changed since 2016.
How polls work — and what happened in 2016
U.S. presidential elections are decided by a handful of key states, and polls that cover the whole country or individual states have limits to how precise they can be. All polls have random sampling error, inherent in relying on a sample of the population. And they all have to figure out who will vote and what their preferences are. The best polls are transparent about this.
Some types of people may be more likely to answer a survey than others, which pollsters work to correct for in the way they draw samples and by weighting samples to match population demographics or political characteristics.
But that still has limits: Polls take a few days to conduct, so it’s harder to capture people who decide at the last minute. Late-deciding voters can matter and played a role in 2016, with Trump winning voters who decided in the last week by 29 points in Wisconsin, 17 points in Pennsylvania and 11 points in Michigan, according to network exit polls.
On top of that, many of the best-quality state polls still carry a margin of sampling error of three to five percentage points, and that figure applies to each candidate’s share of the vote; the gap between the candidates can therefore be off by roughly twice as much. Averaging polls can help smooth out random variation, but averaging can only do so much if the polls are missing in the same direction.
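As a rough illustration of where those error margins come from, here is the standard 95 percent sampling margin of error for a single candidate’s share. The 600-voter sample and 49 percent support figure are hypothetical, chosen only because they produce the kind of four-point margin the article describes:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error for one candidate's share p in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical state poll: 49% support among 600 likely voters.
moe = margin_of_error(0.49, 600)
print(f"MOE per candidate: +/- {moe * 100:.1f} points")   # about 4.0 points
# The gap between two candidates carries roughly double that uncertainty.
print(f"MOE on the margin: +/- {2 * moe * 100:.1f} points")
```

Note that this formula captures only random sampling error; it says nothing about systematic misses like undercounting a candidate’s supporters, which is why averaging many polls cannot fix a shared bias.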
A 2017 report by the American Association for Public Opinion Research (AAPOR) found that while national polls had an average error of about two points in presidential elections since 2000 and were fairly accurate in 2016, the error in state polls was double that, at four points. The level of state polling error ranged from 3.2 points in 2004 to 5.1 points in 2016.
It is almost impossible to predict the direction in which polls will miss in a given year. In 2012, some supporters of Republican nominee Mitt Romney criticized surveys during the campaign for overrepresenting Democrats. But final state polls actually underestimated Barack Obama’s vote margin by 2.3 percentage points on average, according to the AAPOR.
The 2012 errors were largely forgotten because most polls showed Obama leading, just by a smaller margin. Yet the 2016 election demonstrated how systematic errors in a variety of states can lead to a surprising outcome. National polls missed by just a single percentage point on average, yet state surveys underestimated Trump’s vote margin by three points in Michigan and Pennsylvania, as well as seven points in Wisconsin.
How pollsters are adjusting
Pollsters have made two significant changes this year aimed at improving accuracy over 2016.
One is simply conducting more polls, a costly decision but one that may contribute to greater precision overall. From the start of September to last week, RealClearPolitics tracked 105 polls in Pennsylvania, Michigan, Wisconsin and Arizona, nearly double the 54 polls over the same period in 2016 in these states. The number of nonpartisan polls conducted with live telephone interviews — a more expensive but historically more accurate method — grew from 24 in 2016 to 36 in 2020.
In addition, more polls appear to be weighting samples by educational attainment, something many state polls did not do in 2016 and that the AAPOR’s post-election report found contributed to polls underestimating Trump’s support.
Polls are routinely weighted to match estimates of population demographics, including race, age and other factors that are correlated with voting. In 2016, there was an especially strong correlation between education and support for Trump and Clinton in key states, with Trump winning by wide margins among White voters with some college or less. Current polls show that correlation remains strong in this year’s contest between Trump and Biden. College graduates have long been more likely to participate in surveys, and polls that did not weight samples to correct for that — by weighting down responses from people with degrees to their actual share of the population — were at greater risk of underestimating Trump’s support.
One example of this is Muhlenberg College, which conducts Pennsylvania surveys in collaboration with the Morning Call newspaper. Its final 2016 survey — weighted by gender, age, race, region and party registration, but not education — found 48 percent of likely voters had at least a bachelor’s degree. In 2012, a much smaller share of the commonwealth’s voters had college degrees: 35 percent, according to the Census Current Population Survey.
Muhlenberg College’s polls now weight samples by educational attainment, and its latest survey finds college graduates representing 38 percent of likely voters, much closer to the 36 percent recorded in the 2016 Census survey. Biden’s 51 percent to 44 percent lead over Trump in the school’s latest survey is slightly larger than the six-point margin Clinton held in its final 2016 poll, but the new weighting protocol guards against one clear source of error and gives more confidence in the pollster’s results.
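The weighting adjustment described above can be sketched numerically. The education shares below come from the Muhlenberg figures cited in the text (48 percent graduates in the raw sample versus roughly 36 percent in the population); the candidate-support splits by education are hypothetical, chosen only to show how weighting down over-sampled graduates shifts a topline number:

```python
# Post-stratification weighting by education (illustrative sketch,
# not the pollster's actual procedure).
sample_share = {"college_grad": 0.48, "non_grad": 0.52}      # raw responses
population_share = {"college_grad": 0.36, "non_grad": 0.64}  # weighting target

# Each respondent in a group gets weight = target share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical splits: graduates 60-40 for Biden, non-graduates 42-58.
support_biden = {"college_grad": 0.60, "non_grad": 0.42}

unweighted = sum(sample_share[g] * support_biden[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support_biden[g] for g in sample_share)
print(f"Unweighted Biden share: {unweighted:.1%}")  # graduates over-counted
print(f"Weighted Biden share:   {weighted:.1%}")    # roughly 2 points lower
```

With these illustrative numbers, weighting trims about two points off the unweighted topline, which is the kind of correction that would have narrowed the 2016 misses in states where graduates were over-represented.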
What to know now
Don’t count anyone out.
Polls in 2016 were more accurate than many people think, and in the long run, there’s reason to have confidence in them. Still, there are wild cards this year — the coronavirus pandemic, extraordinary political interest, an election like none before and the adoption of new voting methods — that could affect the accuracy of pre-election polls.
Take the pre-election vote estimates as advertised: a tally late in the game, with voters in charge of the final score.