Like one of Pompeii’s more unfortunate residents, the Web page where the New York Times estimated that Hillary Clinton had an 85 percent chance of winning the 2016 election can still be visited. The number is frozen in place, stamped as having been updated at 10:20 p.m. Eastern on election night — after polls closed in Michigan, Pennsylvania and Wisconsin, the states that handed Donald Trump the presidency.

For some time, I’ve been curious about whether there was a causal link between those two things. It seems clear that some number of people saw forecasts from the Times and FiveThirtyEight suggesting that Clinton had a high probability of winning and assumed, with some justification, that she would. Did that make them less likely to vote? And did that, in turn, contribute to Trump’s skin-of-his-teeth wins in the upper Midwest?

Did that Web page give Trump the presidency?

A new study from researchers at Dartmouth College, George Washington University and the University of Pennsylvania doesn’t offer a definitive confirmation of that theory, but it does suggest that probability-based models of the outcome of the election — that is, models that gave Clinton a certain percentage chance of winning — may have led to an overestimation of her actual chances and may, in turn, have prompted some people to deprioritize getting to the polls.

Sean J. Westwood, an assistant professor of government at Dartmouth, studies American elections and political behavior. He’s one of the authors of the study, and he spoke to The Washington Post by phone Friday.

Using data from the American National Election Studies project, the research demonstrates a correlation between confidence in a candidate’s chances and a lower likelihood of casting a ballot. The effect was about three percentage points, Westwood said: people who believed a candidate would win by a wide margin were about three percentage points less likely to vote in the election.

That was particularly the case among Democrats in 2016, Westwood said, primarily because more Democrats believed that their candidate would win by a wide margin.

This finding by itself doesn’t show that tools reporting the probability of a candidate’s victory spurred people to think Clinton would win easily. For that, Westwood and his team conducted two experiments.

In one, they presented information about a hypothetical election and asked people to estimate what the outcome of the race might be. Some people got information about predicted vote margins. Some got margins combined with margins of error. And some got probabilities.

“The thing that stood out to me,” Westwood said, “is that given that probability-only condition — so I’m just saying that Candidate A has an 80 percent chance of winning — when we asked respondents what they thought the margin of victory would be, they reported that it would be 80 percent. So they’re saying that, in their minds, they’re not separating the difference between a probability and the actual vote share.”

In the context of that Times tool, then: Some people probably thought that Clinton would win 85 percent of the vote.

That’s obviously inaccurate, at least to close observers of elections. But the finding that people are bad at interpreting probability suggests that probability should not be the only metric they are shown.
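The gap between those two readings is worth making concrete. Here is a minimal simulation sketch, with illustrative numbers that come from neither the Times model nor the study: suppose a candidate leads by 3 points in the polling average, and polling error is roughly normal with a 3-point standard deviation.

```python
import random
import statistics

# Illustrative inputs, not the Times model's or the study's numbers:
# a 3-point polling lead with roughly normal, 3-point polling error.
POLL_LEAD_PTS = 3.0
POLL_ERROR_SD = 3.0
TRIALS = 100_000

# Simulate many plausible election-night margins.
margins = [random.gauss(POLL_LEAD_PTS, POLL_ERROR_SD) for _ in range(TRIALS)]

win_probability = sum(m > 0 for m in margins) / TRIALS
expected_margin = statistics.mean(margins)

print(f"chance of winning:    {win_probability:.0%}")       # about 84%
print(f"expected vote margin: {expected_margin:+.1f} pts")  # about +3.0, i.e. ~51.5% of a two-way vote
```

An 84 percent chance of winning coexists with a razor-thin expected vote share. Collapsing the two, as the study’s respondents did, turns a close race into a landslide.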

But that’s what many outlets did in 2016, according to Westwood’s analysis.

“If you look at media coverage, on television news at least, we’ll see reporters or pundits saying something like, ‘Nate Silver reports 80 percent chance of Hillary Clinton winning’ — without providing any kind of vote-share context,” he said.

In the other experiment, the researchers simulated an election. Participants could earn money if their chosen candidate won, but casting a vote cost money. Those who thought their candidate was likely to win conserved their money — a proxy for the time and effort of voting — confident that the payoff would come anyway.
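The setup taps the textbook expected-utility calculus of voting: it is rational to vote only when the boost your vote gives your candidate’s chances, multiplied by the payoff of winning, exceeds the cost of voting. A minimal sketch of that logic, with hypothetical payoffs rather than the study’s actual parameters:

```python
# Hypothetical numbers for illustration; these are not the study's payoffs.
def should_vote(p_win_if_vote: float, p_win_if_abstain: float,
                payoff: float, cost: float) -> bool:
    """Vote only if the expected gain from voting exceeds its cost."""
    expected_gain = (p_win_if_vote - p_win_if_abstain) * payoff
    return expected_gain > cost

# A forecast of near-certain victory shrinks the perceived difference
# between voting and abstaining, so the cost no longer seems worth paying.
print(should_vote(0.99, 0.98, payoff=10.0, cost=0.5))  # False: gain 0.1 < cost 0.5
print(should_vote(0.60, 0.40, payoff=10.0, cost=0.5))  # True:  gain 2.0 > cost 0.5
```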

Westwood summarized the challenge underlying probabilistic models.

“Humans don’t think about probabilities very efficiently,” he said. “They tend to make erroneous assumptions. It appears to be the case across a variety of social science research that that is not contingent on education, that’s not contingent on training. It seems to be an innate state of human nature. So I’m not convinced that there is a way to provide additional context or additional information that would allow individuals to correctly process probabilistic forecasts.”

Asked whether probabilistic models had affected the results in 2016, Westwood indicated that no such causation could be proved.

“We can’t actually show whether or not this had an effect on 2016,” he said. “We can show from our experiment that the effect sizes were large and that, if we look at 2016, our work probably does indicate that probabilistic forecasts had some effect on people’s decision to vote.”

“I think it is entirely possible,” he added, “that probabilistic forecasts and the certainty that they provide could have swung states to Donald Trump,” given the narrow margins of victory.

Asked the percentage of likelihood that this happened, Westwood demurred.