This isn’t a scientific analysis. It’s simply how campaigns think — the sort of internalized common knowledge that is certainly worth questioning. Like lawn signs, maybe this has always been wrong. Maybe voters’ sense that their preferred candidate is a sure thing somehow makes them more likely to go cast a ballot.
Consider the most recent presidential election, though, and you might be forgiven for assuming that this wasn’t the case. Hillary Clinton was widely expected to win: National polls had her up by a few points, but projections of the anticipated outcome, based largely on state polls, showed the likelihood of her winning consistently north of 70 percent. Clinton was considered such a shoo-in that FiveThirtyEight’s estimate that she had about a two-in-three chance of victory was excoriated as being deliberately too generous to Trump. Some of this response was certainly born of a misplaced belief that, by taking down 2012 election-oracle Nate Silver, you yourself would become the new king of prognostication. But some of it was a genuine belief that Clinton was all but assured victory — a belief no doubt bolstered by projections showing her as a likely or inevitable winner.
Clinton, as you may have heard, didn’t win. The national polls were right, but the state polls were off just enough in just the right places (for Trump) to give him an Electoral College victory. Why? One reason is that some white working-class voters flipped from the Democrats to Trump. Another reason is that Democrats were more likely to stay home than Republicans.
On Tuesday, Pew Research released the results of a study aimed at determining whether people understand how to interpret probabilities in the context of elections — that is, what it means for Hillary Clinton to have a 67 percent chance of winning. Further, the study ran an experiment testing whether people who believed that a candidate was very likely to win were then less likely to vote.
To the first question, the researchers found that people operating only on information that a candidate had an 87 percent chance of winning were much more likely to assume the candidate would win than people who were told the candidate would get 55 percent of the vote. They were also much more confident about their predictions.
This makes some sense, of course. Winning 55 percent of the vote seems as though it’s close to even, even though it represents a relatively big 10-point victory. Eighty-seven percent seems overwhelming, even if it’s measuring the same thing in a different way. (Anyone writing about polling in the aftermath of 2016 has probably gotten feedback along the lines of “are these the same polls that gave Clinton a 90 percent chance of winning??” Such feedback is . . . frustrating.)
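To see how a modest vote share and an overwhelming-sounding probability can describe the same race, here is a minimal sketch of one way forecasters convert a polled margin into a win probability. The normal error model and the 9-point spread on the margin are my illustrative assumptions, not the Pew study’s method or any outlet’s actual model:

```python
from statistics import NormalDist

# Illustrative model: the candidate's final two-party margin is
# normally distributed around the polled margin, with polling error.
polled_share = 0.55              # candidate polls at 55 percent of the two-party vote
margin = 2 * polled_share - 1    # a 10-point polled margin
error_sd = 0.09                  # assumed 9-point error spread (hypothetical)

# Probability the margin ends up above zero, i.e. the candidate wins.
lose_prob = NormalDist(mu=margin, sigma=error_sd).cdf(0)
win_prob = 1 - lose_prob
print(f"{win_prob:.0%}")  # → 87%
```

Under these assumptions, the same 55–45 race reads as roughly an 87 percent chance of victory — which is exactly why the two framings feel so different to readers.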
One reason that media outlets created probabilities was to give people a better sense of the likelihood of an outcome. The Pew study suggests that this worked, giving people a better sense of who was likely to win — though respondents also underestimated the likelihood of a victory even after being told the actual probability.
The researchers also conducted an experiment in which they allotted participants a certain amount of money to use to cast votes in a series of games. If their team won the vote in a game, they’d receive more money — or lose money if their team lost. Before each game, the players were given both vote-percent predictions and probabilities.
“Results showed that probabilistic forecasts with higher odds of one candidate winning resulted in people not expending resources necessary to cast a vote in the game,” Pew’s Solomon Messing wrote. “In contrast, the size of vote share projections had no detectable effect on voting in the game.”
The larger the distance from even odds, the less likely players were to invest in a vote. The full study compares that drop-off to the final vote projections of a number of media outlets in the 2016 election.
Mind you, the researchers did not argue that overconfidence cost Clinton the 2016 election. They simply noted that in several critical states, Clinton lost by less than a percentage point — and that, in the experiment, probabilities showing a 20-point divergence from even odds led to a 3.4-point drop in voting.
A 20-point divergence is about where FiveThirtyEight’s projection ended up.
After the election was over, Silver authored an article titled, “The Media Has A Probability Problem.” If this research maps cleanly onto the real world, so did Hillary Clinton. Her campaign’s position was the inverse of what it wanted: Most Americans believed her victory was all but inevitable, but, ironically, the race was actually in the sweet spot where a small drop in turnout could decide the outcome.