It seems like every election, some observers suggest that the polls are biased against one party. They’re often wrong. In 2012, many thought the polls were biased against Republicans, but they ended up being biased against Democrats. In 2014, many argued that the polls could be biased against Democrats, but they were biased against Republicans instead. Some years there has been no bias. Faced with these results, the bias seems unpredictable.

But the bias may not be quite so random. Instead, we may be looking at the problem the wrong way. The polls might not be biased against one party so much as against the ultimate winner, and in a subtle way that makes this bias a little hard to see.

To look for this anti-winner bias, for each Senate election cycle since 1990, I ran the Election Lab poll averaging process and then calculated how badly the prediction for each Senate race missed the actual vote share for the winner. Negative values represented understatements of the winner’s vote, while positive values were overstatements.
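The sign convention above is easy to get backward, so here is a minimal sketch of the error measure; the function and variable names are hypothetical, not taken from the Election Lab code:

```python
# Hypothetical sketch of the error measure described above: the
# prediction's miss on the winner's vote share, in percentage points.
# Negative = the polls understated the winner; positive = overstated.
def signed_miss(predicted_winner_share, actual_winner_share):
    return predicted_winner_share - actual_winner_share

# Toy numbers, not real polling data:
print(signed_miss(48.5, 52.0))  # polls understated the winner, so negative
print(signed_miss(53.0, 51.0))  # polls overstated the winner, so positive
```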

Now here’s the subtle part. What matters most for predicting wins and losses is whether the vote-share prediction is biased in races expected to be close. That’s because in races expected to be uncompetitive, the bias would have to be enormous to seriously threaten the predicted winner.

To get at this idea, I estimated a regression model that explained the bias in the prediction as a function of the margin in the polling average. (The margin is calculated as the difference from 50 percent in a two-candidate race, so if you want the difference between the two candidates instead, you have to roughly double all the numbers below.) This makes it possible to estimate the bias for a race expected to be perfectly competitive at 50 percent for each candidate — in this case, it’s just the intercept from the model.
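A rough sketch of that setup, with made-up numbers rather than the actual Senate data: fit an ordinary least squares line of the bias on the margin, and read the estimated bias for a dead-even race off the intercept (where the margin is zero).

```python
import numpy as np

# Hypothetical toy data, one row per race in a given cycle:
# margin = distance of the polling average from 50 percent, in points
# bias   = predicted winner share minus actual winner share, in points
margin = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
bias = np.array([-2.1, -1.8, -1.2, -0.5, 0.3, 1.1])

# OLS fit: bias = intercept + slope * margin.
# The intercept is the estimated bias in a race polling at exactly
# 50 percent-50 percent (margin = 0).
slope, intercept = np.polyfit(margin, bias, 1)
print(intercept)  # negative here: an anti-winner bias in a dead-even race
```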

The graph below presents the estimated bias in a 50 percent-50 percent race for each year. In this type of race, the polls are biased against the winner in every single election.

Of course, the anti-winner bias is larger in some years than others, and in some years (1992, 1998, 2004, 2008) we can’t be very confident the bias is real.

But the cumulative chance of seeing a bias in the same direction over so many years is pretty small. And by comparison, the same calculation for a simple partisan (Democratic/Republican) bias bounces around from one year to the next without any clear pattern.

The bias in uncompetitive races is less consistent than for competitive ones. In a few years (1990, 2008, and 2014), the anti-winner bias is similar in both competitive and uncompetitive races. But in other years, the anti-winner bias is smaller in uncompetitive races — and in fact, can flip to a pro-winner bias in the least competitive races.

Where does this bias come from? It’s not entirely clear, but here are two possible explanations.

First, when a race develops a reputation for being close — based on the fundamentals, early polls, opinions by close observers or whatever — it may give partisan pollsters in particular an incentive to keep making it look close. After all, by the last few weeks of the election, a lot of people have invested a huge amount of money and effort in the outcome. You want to keep your supporters energized, and a poll or two that suggests a closer race would be very helpful, especially for the side that is trailing in most polls.

This argument makes a lot of sense, but it doesn’t get much support from the data. When explicitly partisan pollsters are removed from all calculations, the results are basically the same.

The other possibility is that all pollsters would rather claim that a race is competitive, since it will draw more attention to their polls. Moreover, the candidates themselves might be less bothered if such a prediction ends up wrong, since both contenders would rather err on the side of taking the race seriously. If we figure pollsters also don’t like bucking the prevailing wisdom of any kind, it might encourage some form of herding toward a competitive prediction.

It’s harder to test this idea, so it has to remain more speculative. And there are certainly examples where the explanation doesn’t work. The polls in this year’s Virginia Senate race actually overestimated the winner’s vote share, making the race look too safe for Mark Warner. Worse still, at least one pollster confessed to suppressing Virginia results suggesting the race would be close, for fear that he would be attacked. Still, the pattern doesn’t have to hold in every case. It just needs to be right on average.

Regardless of its cause, the notion of an anti-winner bias helps resolve a contradiction: While the polls sometimes have a systematic partisan bias, they are generally very accurate at predicting winners and losers. In fact, despite all the concern about the anti-Republican bias of the polls in 2014, most forecasters really only miscalled a single Senate race (North Carolina). It appears that whatever produces this anti-winner bias, it isn’t large enough in most cases to predict the wrong outcome. Instead, it tilts things toward a perfectly competitive race without flipping it to the other side.