FiveThirtyEight updated its historical database of U.S. poll accuracy to include national, state and congressional district surveys from 2016 to this year. Here’s how the website’s founder, Nate Silver, summed up the findings:
“Polls of the November 2016 presidential election were about as accurate as polls of presidential elections have been on average since 1972. And polls of gubernatorial and congressional elections in 2016 were about as accurate, on average, as polls of those races since 1998. Furthermore, polls of elections since 2016 — meaning, the 2017 gubernatorial elections and the various special elections to Congress this year and last year — have been slightly more accurate than average.”
Silver’s analysis focused on the performance of 2016 polls in particular, finding that national pre-election polls missed the vote margin between Trump and Clinton by an average of 3.1 percentage points in 2016, lower than the 4.1-point average in elections since 1972.
State presidential polls missed by an average of 5.2 points in 2016, slightly higher than the historical average of 4.8 points. When national and state polls are combined, 2016 errors were slightly lower than the historical norm.
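The error statistic these comparisons rely on, the gap between a poll's candidate margin and the certified result, is simple enough to sketch in a few lines of code. The figures below are invented for illustration; they are not actual 2016 polls or FiveThirtyEight's exact methodology:

```python
# Illustrative sketch of average absolute error on the margin.
# The (polled, actual) pairs below are hypothetical, not real polls.

def margin_error(polled_margin: float, actual_margin: float) -> float:
    """Absolute gap, in percentage points, between a poll's
    candidate margin and the actual vote margin."""
    return abs(polled_margin - actual_margin)

# (polled margin, actual margin); positive = leading candidate ahead
polls = [(3.0, 2.1), (-1.0, 0.5), (4.5, 2.0)]

avg_error = sum(margin_error(p, a) for p, a in polls) / len(polls)
print(round(avg_error, 2))  # average absolute miss in points
```

Averaging this miss across every poll in a cycle yields figures like the 3.1-point national and 5.2-point state averages cited above.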
That conclusion might be hard to square with the drubbing polls faced in the days after the 2016 general election, but the findings echo a 2017 report published by the American Association for Public Opinion Research, which found that "[n]ational polls were generally correct and accurate by historical standards" and that state-level polls showed a competitive contest "but clearly under-estimated Trump's support in the Upper Midwest." (Disclosure: I am a member of AAPOR and was part of the committee that produced the report.)
Silver blamed the media for overhyping 2016 polling errors and pointed to two factors that fueled this perception. “Polling of the 2004, 2008 and 2012 presidential races was uncannily good — in a way that may have given people false expectations about how accurate polling has been all along.” He also noted that “error was more consequential in 2016 than it was in past years, since Trump narrowly won a lot of states where Clinton was narrowly ahead in the polls.”
FiveThirtyEight's findings are consistent with a major study of international polling accuracy by political scientists Will Jennings and Christopher Wlezien, published in March in the journal Nature Human Behaviour.
The analysis used an enormous database of national polls from 351 general elections in 45 countries from 1942 to 2017 to assess whether pre-election polls had actually become less accurate over time.
Jennings and Wlezien wrote that "there is no evidence that poll errors have increased over time, and the performance in very recent elections is no exception." Even in the 11 elections that had occurred since mid-2015, the average polling error in gauging support for large parties was 2.6 percentage points, barely higher than the 2.3-point long-term average.
"The sky is not falling," Wlezien said in presenting the results at a May conference held by AAPOR in Denver.
While the recent studies offer reassurances that poll accuracy is not declining over time, their findings also point to useful takeaways for poll watchers heading into the 2018 elections.
The finding that errors in state-level polls run about two points higher than in national polls is a clear caution against overinterpreting a candidate's lead in U.S. Senate and gubernatorial polls this year.
FiveThirtyEight's finding that, within each election cycle, polls tend to systematically underestimate either Democratic or Republican support is a reminder that a party trailing by small amounts across a range of states could still win many of those elections if polls underestimate its support across the board.
Lastly, both FiveThirtyEight and AAPOR's 2016 report found that U.S. polls did not consistently underestimate Democratic or Republican candidates across election cycles. This makes it difficult to predict how polls will err in future elections, but it also suggests there is no built-in partisan bias in pre-election polls as a whole.
Emily Guskin contributed to this report.