Hopes on the left for a blue wave that would hand the Senate to Democrats evaporated last week. President Trump outperformed the pre-election polling numbers once again, but the larger story, at least where the validity of polls is concerned, is that Republican Senate candidates outperformed polls to a much greater degree than the man at the top of the ticket.

In the 16 key races that have been decided, Republican Senate candidates attracted an average of seven percentage points more support than the pre-election polling suggested. That makes this year a major historical outlier: In presidential election years since 1996, the average error in Senate races has been no more than four points in either direction. The misses are especially surprising because polls did well in the 2018 and 2019 elections.

Sen. Lindsey O. Graham (R-S.C.), for example, secured 54.5 percent of the vote, compared with 44 percent for his Democratic rival, Jaime Harrison. Polls the week before the race had put Graham’s support at 46 percent. In Montana, Sen. Steve Daines (R) won 55 percent of the vote, compared with 45 percent for Gov. Steve Bullock (D) — whereas pre-election polls showed Daines with a mere one percentage point lead.

Polls gave a mistaken impression of how close other key races were, too. In Iowa, in the last week of pre-election polls, Theresa Greenfield (D) led Sen. Joni Ernst (R) by a median of one percentage point, yet Ernst won by more than six. In Maine, where ranked-choice voting helps challengers, Sara Gideon (D) had a lead of 2.5 percentage points over Sen. Susan Collins (R), yet Collins ended up four points ahead of her combined rivals.

The GOP swept all races where the margin was within three points in either direction. Control of the Senate now hinges on two races in Georgia, which go to runoff elections on Jan. 5.

Getting Senate polls right is not just a matter of punditry. Understanding the dynamics of close races shapes how campaigns and activists allocate resources. Democratic donors poured millions of dollars into Kentucky and South Carolina, for example, when, in hindsight, that money and effort would have made a bigger difference in North Carolina and Georgia.

For context, Trump outperformed the polls by 2.6 percentage points in states that President-elect Joe Biden won, and by 6.4 percentage points in states that he won himself. (Presidential candidates tend to outperform the polls in states they win by large margins, perhaps in part because the opposition foresees the result, which damps turnout.) Nationally, pollsters are still debating the degree to which Trump’s strength was undermeasured, but it appears to be on the order of three percentage points.

Calculations by my poll-aggregation organization, the Princeton Election Consortium, reveal a possible hidden reservoir of support that Republican Senate candidates enjoyed, even relative to Trump: In the seven closest races where the counts are complete, they overperformed polls by three to 11 points. Their average vote share was 52.1 percent — almost exactly the same as Trump’s vote, 52.3 percent.

Notably, there was little polling error for Democratic candidates. In the same seven races, Democrats did 0.7 points better than their final polling numbers, and matched Biden’s numbers closely. In short, pollsters accurately reported Democratic preferences for Senate, but not Republican preferences.

People sometimes speak of “shy” Trump voters: people who are unwilling to admit that they support him, because of social disapproval or other reasons. But setting Trump aside, it seems highly implausible that a “shy Steve Daines” effect exists. Instead, these results are consistent with the hypothesis that supposedly undecided Senate-race voters in fact have a hidden preference for the party of their preferred presidential candidate. (Undecided voters in Senate races constituted about five percent of the electorate this time.)

Why would a voter be able to express Trump support but have difficulty responding about the Senate candidate?

One explanation may have to do with the fact that most voters don’t obsess over politics as much as analysts do. We’ve all been living in The Trump Show for the past four years; it seems plausible that some Republican voters who formed opinions about the top of the ticket were not paying close attention to Senate races until it was time to vote.

It’s also the case that “undecided” is a misleading name for the voters who fall under that rubric. From a cognitive standpoint, few of us are truly undecided. In voting, as in other domains of life, we can’t always verbalize what we want — even if we have clear leanings in one direction or another. There’s a fair bit of social science research demonstrating that “undecided” voters in fact have preferences that can be inferred by asking more questions.

A few more follow-up questions could identify undecideds’ unstated preference: for instance, “Who do you support for President?” or “Are you favorable/unfavorable toward President Trump [or former vice president Biden]?” (With modern levels of partisan polarization, down-ticket loyalty is extreme: In 2016, every Senate race was won by the same party as the presidential winner.) The answers to those questions will predict down-ticket choices. Pollsters can also ask questions that reveal opinions about social trust: Some research suggests people who rate high on that quality are more likely to engage with pollsters — and also more likely to be Democrats. Asking those questions, therefore, may reveal whether Republicans are underrepresented in a survey.

News coverage should also adapt to this understanding. Poll aggregators emphasize the margin between the two leading candidates, which obscures either candidate’s hidden potential. Instead, they can foreground whether either candidate is approaching the 50 percent threshold. In South Carolina, Harrison was tied with Graham at 46 percent in pre-election polls. That means a lot of the vote was still in play, which should have made observers more cautious.

After this year’s errors, many people are once again suggesting that polls are worthless. They aren’t: They remain our best tool for understanding public sentiment in the aggregate. But there’s a lot of room for improvement. The mistakes in the Senate races reveal one place where pollsters need to do more work.