
Where were you on the night of Nov. 8, 2016? If you’re like many political junkies, you were watching election night coverage and wondering not whether Hillary Clinton or Donald Trump would win but whether Clinton might do so well that she’d win in places like Texas and Arizona.

When she lost, many on both sides of the aisle were shocked. After all, forecasters gave her odds of winning that ranged from 70 to 99 percent. These statistical win forecasts are increasingly prominent and widely shared — thanks in part to the work of sites like FiveThirtyEight, the Huffington Post, the New York Times Upshot and the Princeton Election Consortium.

How exactly do people understand these forecasts? Could this widespread confidence in a Clinton victory be due in part to increasing coverage of win forecasts?

These questions motivated my recent paper with political scientists Sean Westwood and Yphtach Lelkes. What we found is that the presentation of election forecasts can have serious consequences for the way the public understands elections, and whether they participate.

Here’s how we did our research

Election forecasts can be presented in two basic ways. One conveys each candidate’s chance of winning. Here, for example, is FiveThirtyEight’s rolling presidential election forecast of the “chance of winning,” from June 2016 to Election Day:

Another way to present a forecast is each candidate’s expected share of the vote. Here is FiveThirtyEight’s vote share forecast for 2016:

We conducted simple experiments in which Americans were shown forecasts for hypothetical candidates. Some people saw a candidate’s projected chance of winning. Others saw a vote share forecast. Again, the candidates were hypothetical, not Clinton or Trump. But the results were striking nonetheless.

It is hard for people to understand election forecasts

First, some people struggled to understand exactly what the chance of winning implies. In fact, almost 1 in 10 confused the chance of winning with the vote share — thinking that, say, a 71 percent chance of winning was the same as winning 71 percent of the votes.

Second, when people saw a forecast expressed as a chance of winning, they were more confident that the candidate depicted as being ahead would win — compared to those who saw only a vote share forecast.

The central challenge is that the “chance of winning” depends both on how far ahead one candidate appears in the polls and on the uncertainty surrounding that lead, which is often conveyed as the margin of error.

But many people are not used to thinking about statistical uncertainty and how it relates to the chance that a single event will happen. For example, in our experiment, people’s assessments of the chance a candidate would win did not change when we excluded the margin of error from the forecasts we presented. People simply did not appear to take that information into account.
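That relationship can be made concrete with a short sketch. The numbers and the model below are illustrative assumptions (a two-candidate race with roughly normal polling error), not any forecaster’s actual method:

```python
# A minimal sketch of why a "chance of winning" depends on both the
# projected lead and the margin of error. Assumes a two-candidate race
# and normally distributed polling error; not any forecaster's model.
from statistics import NormalDist

def win_probability(vote_share: float, margin_of_error: float) -> float:
    """Chance a candidate projected at `vote_share` percent ends up
    above 50 percent, treating the 95 percent margin of error as the
    spread of a normal distribution around the projection."""
    std_dev = margin_of_error / 1.96  # convert a 95% MOE to a standard deviation
    return 1 - NormalDist(mu=vote_share, sigma=std_dev).cdf(50)

# The same projected lead implies very different win probabilities
# depending on the uncertainty around it:
print(round(win_probability(52, margin_of_error=3), 2))  # tighter polls -> ~0.90
print(round(win_probability(52, margin_of_error=6), 2))  # noisier polls -> ~0.74
```

This is why ignoring the margin of error is consequential: a modest 52 percent vote share projection can translate into a lopsided-looking win probability when the polls are tight.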

Election forecasts affected people’s willingness to vote in a hypothetical election

Our second experiment simulated the trade-offs involved in voting in an election. Participants could decide whether to pay a small fee to “vote” for their team, simulating the real-world costs that voters face, such as the time it takes to vote. If their team won each “election,” they would win money. If their team lost, they would lose money.
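The trade-off participants faced can be sketched in a few lines. The fee, payoffs and pivot probabilities below are hypothetical numbers chosen for illustration, not the study’s actual parameters:

```python
# Illustrative sketch of the voting trade-off in the simulated elections.
# All numbers are assumptions for illustration, not the study's payoffs.
def expected_gain_from_voting(p_pivotal: float, reward: float,
                              loss: float, fee: float) -> float:
    """Expected benefit of paying `fee` to vote, if the vote flips the
    outcome (winning `reward` instead of losing `loss`) with
    probability `p_pivotal`."""
    return p_pivotal * (reward + loss) - fee

# In a close race a vote is more likely to be decisive, so paying the
# fee can be worthwhile; in a projected blowout it rarely is.
close   = expected_gain_from_voting(p_pivotal=0.10, reward=2.0, loss=2.0, fee=0.25)
blowout = expected_gain_from_voting(p_pivotal=0.01, reward=2.0, loss=2.0, fee=0.25)
print(close, blowout)  # positive in the close race, negative in the blowout
```

Under this logic, a forecast that makes the race look like a foregone conclusion shrinks the perceived chance of being decisive, and with it the incentive to pay the cost of voting.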

We found that in these simulated elections, people voted at lower rates after seeing win-forecasts with a higher chance of one side winning. But seeing more extreme vote share projections did not produce this effect.

That’s consistent with a lot of past research showing that when people think a real-world election is in the bag, they tend to vote at lower rates. In part for this reason, 38 of the 83 countries studied in a 2012 report go so far as to institute blackout periods for polls during election campaigns.

Could Hillary Clinton have been hurt by confident forecasts in 2016?

This, of course, is the most controversial question. Yet there is reason to expect that Democrats may have been more affected by win forecasts in 2016.

First, in our study, the confidence gained by people who saw that their candidate’s chance of victory was high exceeded the confidence lost by those who saw that their candidate’s chances were low. If that pattern emerged in 2016, win forecasts should have made Clinton supporters more confident that she would win than they made Trump supporters sure that he would lose.

Second, liberals may have been more likely to encounter these forecasts in the first place. Websites that presented forecasts were shared by more liberal Facebook audiences (as you can see in this paper’s replication materials). Only realclearpolitics.com, which doesn’t emphasize the chance of winning, has a conservative audience:

What’s more, the cable outlet with the most liberal audience — MSNBC — was more likely to cover forecasts than other cable news networks.

Indeed, the proportion of Democrats who said they thought one candidate would “win by quite a bit” in 2016 was the highest since 2000.

(Source: American National Election Studies)

But neither our work nor any other research that we know of can demonstrate that these win forecasts caused Democrats to stay home on Election Day in a way that could have affected the final election result.

So what are best practices for presenting election forecasts?

FiveThirtyEight devoted much of its Feb. 12 Politics Podcast to a spirited and sometimes critical discussion of our work. Nate Silver argued that forecasts based on the chance of winning actually give people a better sense of what’s happening than do forecasts based on vote share.

But the evidence doesn’t justify that conclusion. In limited circumstances, people do provide more accurate accounts of the chance a given candidate will win after seeing that quantity in a forecast. But as the graph above shows, they are still far from accurate.

What’s more, as Nate Silver himself recently pointed out, win forecasts are misinterpreted even among journalists who cover polling — and more generally tend to confuse people. I dug deeper into the evidence in a recent piece.

So how should the media cover forecasts going into the 2018 and 2020 elections? Our findings do not point to a definitive set of best practices. We do not know, for example, how the explanations and narratives that accompany forecasts affect readers’ understanding of a race. Further research is needed on this subject.

Nevertheless, when presenting any forecast, media outlets should take into account the difficulty people have in understanding forecasts and the possibility that those forecasts could affect turnout.

Solomon Messing, PhD, is the director of data labs at Pew Research Center.