Heading into Election Day, virtually all public polling, at both the national and swing-state level, pointed to a relatively easy victory for Democrat Hillary Clinton. That, um, didn’t happen. In search of the “why” behind that polling failure, I contacted my longtime friend Jon Cohen. Jon was once the head of polling at The Washington Post; in his new life, he serves as senior vice president of SurveyMonkey, the leading purveyor of Internet-based polls. My conversation with Jon, conducted via email and lightly edited, is below.
FIX: This is at least the second straight presidential election in which the polls missed the mark, this time by a wide margin. Is political polling broken? Why or why not?
Cohen: Looking at the averages, the magnitude of the 2016 national polling miss (roughly three percentage points on the popular vote) is almost identical to that of 2012. The enormous difference is that this time the error pushed virtually all the polls toward the wrong outcome.
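To make concrete why two misses of similar size can land so differently, here is a minimal sketch in Python. The margins are invented for illustration (positive numbers favor one candidate, negative the other); nothing below reflects the actual 2012 or 2016 figures.

```python
# Sketch: polling errors of similar magnitude can have very different
# consequences depending on how close the race is. All margins are invented.

def describe(poll_margin: float, actual_margin: float) -> None:
    """Print the signed error and whether the polls called the winner."""
    error = poll_margin - actual_margin
    called_right = (poll_margin > 0) == (actual_margin > 0)
    print(f"polls {poll_margin:+.1f}, actual {actual_margin:+.1f}: "
          f"error {error:+.1f} points, winner called "
          f"{'correctly' if called_right else 'incorrectly'}")

# A 2012-style miss: the polls understate the winner but point the right way.
describe(poll_margin=1.0, actual_margin=4.0)
# A 2016-style miss: an error of similar size flips the apparent winner.
describe(poll_margin=2.0, actual_margin=-1.0)
```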
That’s not to minimize what happened Tuesday. A final analysis may point to the cataclysmic breakdown of polling and modeling that many in the industry have feared; we just don’t know yet.
FIX: At heart, is this simply a sampling issue? As in, do we just not know how to predict what an electorate will look like?
Cohen: A central part of the necessary investigation into the performance of polling and modeling in 2016 will be a clear-eyed look at sampling, weighting and, crucially, likely voter estimation.
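For readers who haven’t looked under the hood, here is a bare-bones sketch of one of those pieces, demographic weighting: each respondent is weighted by the ratio of a group’s share of the population to its share of the sample. The groups, benchmarks and sample below are invented; real pollsters weight on many variables at once.

```python
# Toy demographic weighting. A group over-represented in the sample gets a
# weight below 1; an under-represented group gets a weight above 1.
# All groups and numbers are invented for illustration.

population_share = {"college": 0.35, "non_college": 0.65}  # assumed benchmarks
sample = ["college"] * 50 + ["non_college"] * 50  # over-represents college grads

sample_share = {g: sample.count(g) / len(sample) for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group in population_share:
    print(f"{group}: sample {sample_share[group]:.0%}, "
          f"population {population_share[group]:.0%}, weight {weights[group]:.2f}")
```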
It’s going to take more than a few days, weeks or even months to sort out conclusively the roles sampling and weighting may have played in the polling errors. One thing, however, is already clear: How pollsters determine who is actually going to vote is broken, regardless of the various approaches taken by public pollsters and the campaigns themselves. At the end of the day, tens of thousands, hundreds of thousands or even millions of people tagged as likely to vote didn’t bother to do so, and perhaps some deemed unlikely to get to the polls did vote.
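To show what a likely-voter screen actually does, here is a minimal sketch of one classic approach, a cutoff model: score each respondent on a few engagement questions, then keep only the top slice matching an assumed turnout rate. Every name and number here is hypothetical; this is not SurveyMonkey’s model or any campaign’s.

```python
# Toy cutoff-style likely-voter screen. Respondents, scoring items and the
# turnout target are all hypothetical.

respondents = [
    {"id": 1, "intends_to_vote": True,  "voted_in_2012": True,  "follows_news": True},
    {"id": 2, "intends_to_vote": True,  "voted_in_2012": False, "follows_news": True},
    {"id": 3, "intends_to_vote": True,  "voted_in_2012": False, "follows_news": False},
    {"id": 4, "intends_to_vote": False, "voted_in_2012": False, "follows_news": False},
]

def engagement_score(r: dict) -> int:
    # One point per engagement signal; real screens use more items and nuance.
    return sum([r["intends_to_vote"], r["voted_in_2012"], r["follows_news"]])

expected_turnout = 0.50  # the pollster's guess at what share of the sample votes
respondents.sort(key=engagement_score, reverse=True)
cutoff = int(len(respondents) * expected_turnout)
likely_voters = respondents[:cutoff]

print("Tagged as likely voters:", [r["id"] for r in likely_voters])
```

The fragility Cohen describes lives in that cutoff: misjudge the expected turnout, or score the wrong signals, and whole blocs of actual voters get screened out while non-voters get screened in.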
The American Association for Public Opinion Research (AAPOR) grew in the shadow of the 1948 “Dewey Defeats Truman” debacle, and the organization has a task force in place to get to the bottom of what happened this year.
Public opinion work is too important to our democracy and too important to understanding ourselves and our society to fade away. I’m confident we will get this right.
FIX: Do we do too much political polling?
Cohen: Quality, not quantity, is at issue here. That needs to be our focus.
FIX: Given these problems, is it time to overhaul traditional live-caller phone polls? Could Internet-based polling have come closer to getting this right?
Cohen: Without a doubt, Internet polls are the future; the challenge is building industry consensus around how to do them well. The overhaul is already underway, in that telephone polls are now conducted very differently than they were even a decade ago. For decades there was a useful consensus around the methodological underpinnings of what made a quality survey. We know the building blocks of its replacement: scale, diversity, controllable self-selection, methodological rigor and transparency. But we need more, and we’re not there yet: Internet polls, as well as live and automated phone polls, appear to have had similar errors this year, just as they did in 2012.
Now, one note on our SurveyMonkey state polling this year: Trump won the presidency by winning four states by a percentage point or less (Florida, Michigan, Pennsylvania and Wisconsin). Three of those four states were “toss-ups” in our final 50-state map. We got two states wrong; the rest came down to what seemed a long-shot Trump sweep of the toss-up states.
FIX: Finish this sentence: “The state of polling on November 9, 2016, is ______________.” Now, explain.
Cohen: “Unsettled . . . and facing a moment of reckoning.”
Whatever the cause, there is a crisis of confidence in the work we do. We all have a responsibility to make sense of our data, get to the bottom of what went wrong and share what we learn.