A combination picture taken on May 8, 2015 shows (L-R) outgoing opposition Labour Party leader Ed Miliband, outgoing leader of the UK Independence Party Nigel Farage and outgoing leader of the Liberal Democrats Nick Clegg announcing their resignations a day after national elections. (Justin Tallis/AFP/Getty Images)

On Thursday, what polls suggested would be a close race between the Conservatives and Labour in the U.K. turned out to be a Conservative rout. We sat down with Scott Clement of The Post's polling team to try to figure out why the final polls were so far off the mark -- and to discuss the theories some other knowledgeable people are putting forward.

(This has been edited and organized for clarity.)


FIX: [FiveThirtyEight's Nate] Silver's been talking for a while about the declining use of landline phones and how that's affecting poll results. Last night he did a quick piece on how he sees this as being part of the problem in polling trends over the long term. [Note: This vastly oversimplifies Silver's more comprehensive point about what happened in the U.K.]

CLEMENT: The problem with response rates has been going on for a long time. Nate's right to point out that this certainly increases the risk of severe polling errors -- landline-only samples have caused problems in polling -- but this ignores the fact that the vast majority of major national telephone surveys in the U.S. and in Britain are incorporating mobile phones.

In fact, the percentage of people who can be contacted by cellular phone is extraordinarily high in the U.S. The only problem is that it costs more, and the response rates aren't any better on mobile phones than on landlines -- if anything, a little bit lower.

FIX: It costs more because you get fewer contacts?

CLEMENT: Exactly. With cellphones, there are a variety of people you can reach who are ineligible for the survey -- people who are too young, under the age of 18, for instance. The other issue is that, for legal reasons, pollsters have to hand-dial cellphone numbers in the United States, as opposed to having an automated dialer that feeds into a live interviewer.

But look: The response rate issues aren't going away, and they exist, interestingly, both for phone polls and for web surveys. So even if you have a panel of web respondents that you've built up, your response rate to an individual survey might be fairly terrible. So those issues of who's going to participate and how that will affect your results? Those aren't going away.

The biggest source of skepticism for me about that argument is that larger-scale studies that have looked at overall response rates and their impact on accuracy find almost no correlation. That's striking. You would think that if I had twice as large a response rate -- a 50 percent response rate instead of 25 -- I would have a better estimate of what's happening in the population. That's not necessarily true.
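[Editor's note: A rough simulation of that point, added for illustration and not part of Clement's remarks -- the party, the response rates and the 30 percent "skew" below are all hypothetical. When non-response is unrelated to the vote, a 25 percent response rate does about as well as a 50 percent one; when one side is less likely to answer, raising the response rate doesn't remove the error.]

```python
# Illustration only: what matters is whether the people who respond differ
# from those who don't, not the response rate itself.
import random

random.seed(42)
POP = 100_000
TRUE_SHARE = 0.37   # assumed true vote share for a hypothetical "Party A"
voters = [random.random() < TRUE_SHARE for _ in range(POP)]

def poll(response_rate, supporter_penalty=1.0, n_contacted=20_000):
    """Contact a random sample; Party A supporters respond at
    response_rate * supporter_penalty, everyone else at response_rate."""
    contacted = random.sample(voters, n_contacted)
    answers = []
    for supports_a in contacted:
        p = response_rate * (supporter_penalty if supports_a else 1.0)
        if random.random() < p:
            answers.append(supports_a)
    return sum(answers) / len(answers)

# If non-response is unrelated to the vote, doubling the response rate
# barely changes the estimate:
print("25% response, no skew :", round(poll(0.25), 3))
print("50% response, no skew :", round(poll(0.50), 3))
# If Party A supporters are 30% less likely to answer, both response
# rates give roughly the same wrong answer:
print("25% response, skewed  :", round(poll(0.25, supporter_penalty=0.7), 3))
print("50% response, skewed  :", round(poll(0.50, supporter_penalty=0.7), 3))
print("true share            :", TRUE_SHARE)
```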

FIX: Let's talk about Thursday. Walk me through what you see from the numbers in terms of how far off it was. What's your theory?

CLEMENT: I don't have a strong theory. What is striking is that the polls were very consistent across different methodologies and across different sample sizes. As some folks at the U.K. Polling Report said, we're not dealing with issues of sampling error here. This is not an issue of random variability going wrong. This is an issue of wholesale bias across a number of different methodologies -- which is especially worrisome, because it doesn't tell you what to fix.

The differences were not gigantic. The average of the final polls missed the Conservative vote share by three percentage points. That's not a lot in an election. They missed the Labour share by three percentage points. That's not huge either, but when the misses run in opposite directions, you miss the entire story.
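[Editor's note: An illustrative simulation, not Clement's analysis -- the vote shares below are rounded approximations of the 2015 result, and the shared three-point bias is assumed. It shows how a set of polls that all lean the same way can agree closely with one another, reporting a dead heat, while the true result is a roughly six-point Conservative lead.]

```python
# Illustration only: shared bias across polls survives averaging,
# while pure sampling error would wash out.
import random

random.seed(7)
TRUE_CON, TRUE_LAB = 0.37, 0.31   # rounded approximations of the 2015 result
BIAS = 0.03                       # assumed shared bias: Con understated, Lab overstated
N = 1000                          # respondents per poll

def one_poll():
    con = lab = 0
    for _ in range(N):
        r = random.random()
        # Each respondent reports Con with probability TRUE_CON - BIAS,
        # Lab with probability TRUE_LAB + BIAS (the rest are other parties).
        if r < TRUE_CON - BIAS:
            con += 1
        elif r < (TRUE_CON - BIAS) + (TRUE_LAB + BIAS):
            lab += 1
    return con / N, lab / N

polls = [one_poll() for _ in range(10)]
avg_con = sum(p[0] for p in polls) / len(polls)
avg_lab = sum(p[1] for p in polls) / len(polls)
print(f"poll average: Con {avg_con:.1%}, Lab {avg_lab:.1%}")   # roughly a dead heat
print(f"true shares : Con {TRUE_CON:.1%}, Lab {TRUE_LAB:.1%}") # about a 6-point gap
```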

What strikes me here is that you have so many methodologies, and that brings up two possibilities.

One is that response biases were at work across all of these methodologies and none of the pollsters were able to adjust for them and fix them.

The other -- and this is very much in line with the thinking of Silver and his colleague Harry Enten -- is that pollsters are looking at each other and are worried about being outliers. Many of these pollsters are professionals and are certainly spending a lot of resources to measure these things. I don't suspect a lot of intentional moving of the needle. But the theory of herding is that when a pollster's result is particularly out of line with others, they tend to go looking for something in it, and maybe they're more likely to find something they should adjust for that moves it back toward the pack.
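[Editor's note: A toy model of that herding mechanism, added here for illustration -- the true lead, the shared five-point bias, the "outlier" threshold and the pull strength are all made-up parameters. Pollsters whose raw estimate lands far from the published average pull it partway back, so the published numbers end up tightly clustered without being any more accurate.]

```python
# Illustration of herding: pollsters who look like outliers adjust toward
# the running average of already-published polls.
import random
import statistics

random.seed(1)
TRUE_LEAD = 6.0    # hypothetical true Conservative lead, in points
SHARED_BIAS = 5.0  # assumed bias shared by every pollster, in points
NOISE = 2.0        # sampling noise per poll (standard deviation, in points)
THRESHOLD = 1.5    # "outlier" if more than this far from the published average
PULL = 0.7         # fraction of the gap an outlier pollster closes

raw, published = [], []
for _ in range(15):
    estimate = random.gauss(TRUE_LEAD - SHARED_BIAS, NOISE)
    raw.append(estimate)
    if published:
        avg = statistics.mean(published)
        if abs(estimate - avg) > THRESHOLD:
            estimate = estimate + PULL * (avg - estimate)  # herd back toward the pack
    published.append(estimate)

print(f"spread of raw estimates      : {statistics.stdev(raw):.2f} pts")
print(f"spread of published estimates: {statistics.stdev(published):.2f} pts")
print(f"published average            : {statistics.mean(published):.1f} pts "
      f"(true lead {TRUE_LEAD:.1f})")
```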

The one other thing that's important for errors is that, in elections like this with multiple parties and coalitions, you can have what's called strategic voting, where voters who prefer one party end up switching their support if they think their second-favorite party has a better chance of winning and giving them representation. The logic of strategic voting is complicated, but we see it in American primaries, where supporters of a long-shot candidate switch to one who appears more viable and likely to win, even if that's not their favorite candidate.

FIX: We live in an era in which polls get a lot more attention. Do you think that has influenced people to be more worried about being outliers? I remember the example from Iowa last year, where the Des Moines Register poll showed Ernst up by something like 10 points and everyone thought it was an outlier. The Register's name was on the poll more so than the pollster's (Selzer and Co.). Do you think that plays any role?

CLEMENT: The connection between pollsters' reputation and their election estimates goes back all the way to the start of scientific polling and George Gallup. The idea that people are worried about their reputations when it comes to polling and elections is nothing new.

There's greater scrutiny now that there are so many polls, and outliers stick out like a sore thumb. We haven't seen something like the 1948 "Dewey Defeats Truman" miss, but people are now scrutinizing these small differences. Sometimes they don't matter. When you have polls with a sample size of just 1,000 and you miss the margin by a couple of points, you should expect that.
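[Editor's note: For reference, the sampling margin of error Clement is alluding to comes from the standard 95 percent formula; the quick calculation below is textbook arithmetic, not something discussed in the interview.]

```python
# Standard 95% margin of error for a proportion: 1.96 * sqrt(p * (1 - p) / n).
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst case is p = 0.5; z = 1.96 corresponds to a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(f"n = {n:>4}: +/- {100 * margin_of_error(n):.1f} points")
# n = 1000 gives roughly +/- 3.1 points on a single party's share, so a miss of
# a couple of points is expected -- and the gap between two parties can be off
# by more than that.
```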

The level of aggregation, the level of focus on pinpointing the results today and knowing what's going to happen -- it's a mixed bag for pollsters in terms of improving their methods. On the one hand, there are a lot more people watching, so they're going to use their best methods and double-check everything they do. On the other hand, there's not as much tolerance for the random error that inevitably exists.

That's part of what's driving this: a desire for greater precision. But we're also pushing up against some of the limits of the method.

FIX: To what extent are you, as a professional, worried about looking at what's going to happen in 2016?

CLEMENT: The lessons from the British election are not clear yet, and learning from them is difficult at this point. A healthy skepticism about the accuracy of surveys, even when they agree with each other in an election contest, is probably the first lesson. The expectation that surveys will really nail a result has gotten very big.

The biggest thing is that you must be constantly scrutinizing what you're doing, looking for potential problems in your survey -- through non-response, through weighting techniques, through the way you're drawing your samples -- and looking for the things that are systematically biasing your results.

This came up in 2012, when Gallup's surveys came under a lot of scrutiny before the election. Many people pointed out apparent problems with the way they were weighting by race or measuring race, and after the election Gallup concluded that was part of the problem. Their experience pointed to a lot of the challenges of polling in the United States. It was a very helpful self-examination.

Those are learning moments for the polling community. We're looking forward to seeing what comes out of the British cycle so we can learn from it. Is there a bias in our surveys that we're not noticing, that the traditional methods are not catching, and that is now failing us? If you're keeping an eye out for those things and updating your methodology to account for them, you're less likely to have systematic errors.

But nothing can take away that risk! These methods are never going to be as precise as we want them to be.