Blaming the polls for getting the 2016 election so wrong is understandable, but there was arguably a bigger problem: the false confidence that the polls inspired. Indeed, the flood of polls may be having a perverse consequence: making voters worse at predicting the election, not better. And the media landscape isn’t helping.
In the past, voters were good at predicting election outcomes.
Since 1952, the American National Election Studies (ANES) has asked people to say who they thought was likely to win the White House and whether that candidate was likely to “win by quite a bit” or in a “close race.” As media polls became a staple of elections coverage in recent decades, the percentage unable or unwilling to offer a prediction has declined from almost 25 percent to less than 1 percent.
Meanwhile, predictions about the presidential race have tracked the popular vote closely:
But in 2012, an unusually high number predicted Obama or Romney would “win by quite a bit.”
Something changed when Obama and Romney faced off. In earlier elections, the percentages predicting a “close race” or a “win by quite a bit” varied predictably with the closeness of the race.
In 2012, however, voters were far more confident — with large percentages believing that either Obama or Romney would win by quite a bit. You can see this in the graphs below, where the 2012 data points sit far away from the others.
These confident 2012 predictions are particularly striking given that polls showed a very narrow Obama lead.
Better-educated people were the most overconfident
Moreover, 2012 stands apart because the most educated people were the most confident, unlike in earlier elections that were similarly close.
In these earlier elections, people with more formal education were less likely to predict a bigger win. In 2012, this pattern was reversed:
In other words, the people most attentive to politics — and perhaps best equipped to understand the limitations of polls — were the most overconfident. The impact of education persisted even after accounting for other factors such as vote choice, party identification, knowledge of politics, age, gender and self-reported interest in election news.
The news media amplify the polls — and therefore overconfidence
I have interviewed several dozen political journalists and pollsters about how polling is used in news coverage. Almost everybody said they regularly followed polling aggregators and forecasters like the Upshot, FiveThirtyEight and Huffington Post. Many expressed hope that paying attention to averages would limit cherry-picking of outlier results and make coverage of the horse race more accurate.
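The logic behind that hope can be sketched with hypothetical numbers: a single cherry-picked outlier can mislead, while the average of several polls is more stable. This is only an illustration with made-up figures, not actual 2016 polling data.

```python
# Toy illustration (hypothetical numbers): averaging vs. cherry-picking.
polls = [47.0, 48.5, 46.2, 51.0, 47.8]  # five hypothetical poll shares for one candidate

average = sum(polls) / len(polls)  # the aggregator's estimate
outlier = max(polls)               # the result a cherry-picker might highlight

print(f"polling average: {average:.1f}")  # 48.1
print(f"cherry-picked outlier: {outlier:.1f}")  # 51.0
```

The average is pulled only modestly by the 51.0 outlier, which is why aggregators argued it gives a steadier picture of the race than any single poll.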
At the same time, many expressed concern about the attention paid to poll aggregators and forecasters. As one reporter put it, “It’s almost from about six weeks out from any election people start saying, ‘It’s over.’”
It was no surprise, then, that the Romney campaign in particular pushed back against forecasts showing that Romney would most likely lose. As one reporter told me:
“If you had asked Stuart Stevens, the Republican, the chief strategist for Mitt Romney, and I did, I asked him multiple times what do you make of FiveThirtyEight showing Romney only having like a 20 percent, 30 percent chance of winning? And he would always just roll his eyes and make fun of it and stuff. And the campaign was actively pushing back against Nate Silver because it was such a … he was guiding the common wisdom that Romney was going to lose.”
In 2016, the media’s fascination with polling-based forecasts may well have created false certainty about the outcome, which in turn altered the way the campaign was covered.
Previous research has found that perceived front-runners receive more negative coverage than candidates who are behind in the polls. Thus, the perception that Clinton was the clear front-runner may have affected the coverage she faced.
Clinton’s apparent lead may also have caused news organizations to discount or otherwise qualify anecdotal evidence of Trump’s support in rural areas — evidence that may have indicated the truly competitive nature of the race.
To be fair, the news media are not solely to blame. The larger media landscape, which has spawned a cottage industry of armchair poll analysis, may also be a factor. The constant debate about polls on social media makes it easier than ever to select only the poll numbers, however flawed, that reinforce your own preferences.
Because poll results and forecasts are numeric — reported to the fraction of a percentage point — political observers often mistake these estimates as exact. As one media pollster I interviewed put it, “If it’s a number, it’s precise. It’s $1.39. It’s 34 percent. And it’s hard for people to get into their heads that there’s imprecision around that.” Ultimately, quantification may make it more difficult to appreciate uncertainty.
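The imprecision that pollster describes can be made concrete. Under the textbook assumption of simple random sampling, a single poll proportion carries a margin of error of roughly ±3 points at typical sample sizes — and real polls have additional design error on top of that. A minimal sketch:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a poll proportion p with
    sample size n, assuming simple random sampling. Actual polls
    carry additional error from weighting and nonresponse."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate "at 48 percent" in a poll of 1,000 respondents:
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

A reported "48 percent" is thus really an interval from about 45 to 51 — wide enough to cover anything from a comfortable lead to a narrow deficit, which is exactly the uncertainty that precise-looking numbers obscure.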
The challenge facing political journalists is not only in cataloging, analyzing and evaluating a complex array of available data but in not losing sight of just how much these data sometimes leave out.
Benjamin Toff is a research fellow at the Reuters Institute for the Study of Journalism at the University of Oxford and will be an assistant professor of journalism at the University of Minnesota beginning in 2017.