In 2020, polls appear to have overconfidently predicted that Joe Biden would handily defeat incumbent President Trump.
What went wrong with the 2020 pre-election polls
The American Association for Public Opinion Research (AAPOR) advised pollsters to make a number of corrections after 2016, including weighting the data to better represent White voters without college degrees; conducting more and better polls to be “averaged” right before the election; and better accounting for the smaller numbers of undecided voters and those leaning toward third-party candidates. But these corrections apparently either did not work or were not made widely enough.
Crucial state polls were significantly off once again, especially in Michigan, Wisconsin and Pennsylvania. Yes, Biden won these states. But he did so by thinner margins than were found in pre-election polling results, which steadily forecast him winning by 4 to 5 percentage points or more. Further, Trump defeated Biden in states that polls suggested were close, like Florida and Texas, by wider margins than expected. In Arizona and Georgia, the polls were within sampling error margins. But they were way off on several congressional races. Maine’s Republican Sen. Susan Collins won handily, despite pre-election polls showing her opponent leading. And while some expected a “blue wave” election that increased Democrats’ control of the House, Republicans gained House seats.
What went wrong? For one, polling was conducted during the pandemic. Many states were voting early or by mail in large numbers for the first time, and those ballots then had to be handled and counted under crisis conditions by voters, the Postal Service, and state election administrators. Poll respondents who said they planned to vote by mail were disproportionately Democratic; some may have failed to do so, run into difficulties, or had their ballots lost in the mail, contributing to polling error.
But here’s the challenge for pollsters: knowing who will actually vote compared with who is responding to polls. Pollsters have to estimate the composition of “likely voters,” based on what has happened in past elections, then weight respondents’ answers so the sample matches that electorate’s demographic makeup.
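The weighting step described above can be sketched in a few lines. This is a minimal illustration of demographic cell weighting, not any pollster’s actual method, and every number in it is hypothetical: each respondent gets a weight equal to their group’s assumed share of the likely electorate divided by that group’s share of the sample.

```python
# Minimal sketch of demographic cell weighting, with hypothetical numbers.
from collections import Counter

# Hypothetical sample: candidate preference by education group.
respondents = [
    {"group": "college", "candidate": "A"},
    {"group": "college", "candidate": "A"},
    {"group": "college", "candidate": "B"},
    {"group": "no_college", "candidate": "B"},
]

# Assumed shares of each group in the likely electorate (illustrative only).
target_shares = {"college": 0.4, "no_college": 0.6}

# Weight = target share of the group / that group's share of the sample.
sample_counts = Counter(r["group"] for r in respondents)
n = len(respondents)
for r in respondents:
    share_in_sample = sample_counts[r["group"]] / n
    r["weight"] = target_shares[r["group"]] / share_in_sample

# Weighted support for candidate A.
total = sum(r["weight"] for r in respondents)
support_a = sum(r["weight"] for r in respondents if r["candidate"] == "A") / total
print(round(support_a, 3))  # college respondents are downweighted here
```

The point of the sketch is the source of error the article describes: if the assumed `target_shares` misjudge who will actually turn out (say, too few Whites without college degrees), every weighted estimate inherits that mistake.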
The polling industry will need to consider the same potential problems discussed after 2016. These include whether some respondents were “shy Trump voters,” reluctant to disclose their intentions, or whether Trump voters were less likely to respond to polls at all, which pollsters call “nonresponse bias.” This may have meant not just underestimating the number of Whites without college degrees who were likely to vote, but also underestimating likely voters in rural or small-town areas, who voted overwhelmingly for Trump.
Some Republican-oriented polls may have done better. They may have weighted their data differently than did other pollsters, adjusting for Republicans’ non-responses, or weighted reported Trump support more to offset “social desirability bias,” or respondents’ desire to say what they believe an interviewer wants to hear.
None of that affects public opinion polling, which is quite different
But while election polling definitely has problems that need to be studied, some pundits are claiming that public opinion polling has failed entirely. That’s not so. Mass opinion polling is a very different animal from election forecasting polls. It occurs regularly between elections, examining all manner of political and social attitudes and behaviors. And it’s reliable and useful for political scientists and others who study U.S. democracy.
So what’s the difference? In pre-election polls, pollsters must estimate who will vote. In mass public opinion polls, pollsters don’t have that problem. Survey samples can be effectively weighted to match census data about the entire adult U.S. population and its subgroups, drawing on an enormous research literature on trends and patterns in public opinion, including partisan conflict today.
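The census-based weighting that makes mass opinion polls tractable is commonly done by “raking” (iterative proportional fitting), which adjusts weights until the sample matches known population margins on several demographics at once. The sketch below is a hedged illustration of that general technique, not any survey organization’s code, and all of its numbers are hypothetical.

```python
# Hedged sketch of raking (iterative proportional fitting): adjust weights
# until weighted sample shares match assumed census margins. All numbers
# here are hypothetical.

respondents = [
    {"age": "18-44", "educ": "college"},
    {"age": "18-44", "educ": "no_college"},
    {"age": "45+",   "educ": "college"},
    {"age": "45+",   "educ": "college"},
    {"age": "45+",   "educ": "no_college"},
]

# Assumed census margins for the adult population (illustrative only).
margins = {
    "age":  {"18-44": 0.45, "45+": 0.55},
    "educ": {"college": 0.35, "no_college": 0.65},
}

weights = [1.0] * len(respondents)

# Alternately rescale weights so each variable's weighted shares match its
# census margin; repeat until the adjustments settle down.
for _ in range(50):
    for var, targets in margins.items():
        total = sum(weights)
        factors = {}
        for category, target in targets.items():
            current = sum(w for w, r in zip(weights, respondents)
                          if r[var] == category) / total
            factors[category] = target / current
        weights = [w * factors[r[var]] for w, r in zip(weights, respondents)]

# After raking, weighted shares approximately match both margins.
total = sum(weights)
age_share = sum(w for w, r in zip(weights, respondents)
                if r["age"] == "18-44") / total
print(round(age_share, 3))  # close to the 0.45 census target
```

Because the targets here come from census data about the whole adult population rather than a guess about who will turn out, this weighting problem is far better defined than the likely-voter estimation that trips up election polls.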
Some might object that here, too, Republicans or Trump voters might fail to respond, skewing estimates of the public’s opinions. But they would be a smaller proportion of the public in a mass survey conducted outside of a heated election campaign. On matters unrelated to the election, that group has no more reason to avoid pollsters than the U.S. public in general. That avoidance mattered in election polls because the gap between Democrats’ and Republicans’ support for their presidential candidates was fully 90 percentage points. But when we’re looking at the U.S. public at large, including nonvoters, there are fewer strong partisans. What’s more, on issues other than whom they’ll choose for president or presidential approval, partisan differences in opinions are much smaller on average, roughly 36 points in one important study.
In other words, the opinion research community does need to continue examining possible sources of error in both election polling and mass surveys. And of course, it needs to encourage transparency in conducting polls and in archiving them for further scrutiny and research. But mass public opinion polling is alive and well.
Robert Y. Shapiro is the Wallace S. Sayre professor of government and professor of international and public affairs at Columbia University, president of the Academy of Political Science, and chair of the Roper Center for Public Opinion Research.