From a “methods” point of view, the key step is to poststratify by party ID, an idea that I’d explored before (with Cavan Reilly) but without realizing the full political implications.
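The mechanics of poststratifying by party ID can be sketched in a few lines. This is a minimal illustration, not the analysis from the paper: the party-ID shares, cell counts, and support rates below are all made-up numbers, chosen only to show how differential nonresponse by party can move a raw topline while the poststratified estimate stays put.

```python
# Hypothetical population party-ID distribution (e.g., from a stable baseline survey).
population_shares = {"Dem": 0.34, "Rep": 0.30, "Ind": 0.36}

# A hypothetical raw poll: respondents per party-ID cell, and the share in each
# cell supporting candidate A. Republicans are underresponding in this sample.
sample_counts = {"Dem": 450, "Rep": 250, "Ind": 300}
support_for_A = {"Dem": 0.90, "Rep": 0.06, "Ind": 0.50}

n = sum(sample_counts.values())

# Raw (unadjusted) estimate: cells weighted by their *sample* proportions,
# so the party-ID mix of who answered the phone drives the topline.
raw = sum(sample_counts[g] / n * support_for_A[g] for g in sample_counts)

# Poststratified estimate: cells reweighted to their *population* proportions,
# so a swing in response rates by party no longer moves the topline.
adjusted = sum(population_shares[g] * support_for_A[g] for g in population_shares)

print(f"raw: {raw:.3f}, poststratified: {adjusted:.3f}")
```

With these invented numbers the raw estimate overstates support for A relative to the poststratified one, purely because one party's supporters answered the phone more often. The hard part in practice, which this sketch assumes away, is knowing the population party-ID shares, since party ID is itself measured by surveys.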
Here’s another way of looking at it: We have a panel survey so we can see how often people were changing their opinion during that critical period of the campaign. Check it out:
(Sorry about that graph where the axis goes below zero. I don’t know how I let that one through.)
This is a big deal and it represents a major change in my thinking compared to my 1993 paper with Gary King, “Why are American Presidential election campaign polls so variable when votes are so predictable?” At that time, we gave an explanation for changes in opinion, but in retrospect I now think that many of those apparent swings were really just differential nonresponse. Funny that we never thought of that.
David, Sharad, Doug, and I came to our conclusion after a fairly elaborate analysis of a new dataset. But the idea was out there. Here was Mark Palko, writing on Nov. 6, 2012, just before the election returns were coming in:
Assume that there’s an alternate world called Earth 49-49. This world is identical to ours in all but one respect: for almost all of the presidential campaign, 49% of the voters support Obama and 49% support Romney. There has been virtually no shift in who plans to vote for whom.

Despite this, all of the people on 49-49 believe that they’re on our world, where large segments of the voters are shifting their support from Romney to Obama then from Obama to Romney. . . .

In 49-49, the Romney campaign hit a stretch of embarrassing news coverage while Obama was having, in general, a very good run. With a couple of exceptions, the stories were trivial, certainly not the sort of thing that would cause someone to jump the substantial ideological divide between the two candidates so, none of Romney’s supporters shifted to Obama or to undecided. Many did, however, feel less and less like talking to pollsters. So Romney’s numbers started to go down which only made his supporters more depressed and reluctant to talk about their choice. . . .

This reluctance was already just starting to fade when the first debate came along. . . . after weeks of bad news and declining polls, the effect on the Republican base of getting what looked very much like the debate they’d hoped for was cathartic. Romney supporters who had been avoiding pollsters suddenly couldn’t wait to take the calls. . . . The polls shifted in Romney’s favor even though, had the election been held the week after the debate, the result would have been the same as it would have been had the election been held two weeks before . . .
I think Palko was basically right (although I’d change his 49-49 to something more like 51-49), and he gets extra credit for figuring this out without having the panel data to show it. If all the major pollsters had been poststratifying by party ID, though, maybe it would’ve been clearer.
Let me conclude with a statistical point. Sometimes researchers want to play it safe by using traditional methods — most notoriously, in that recent note by Michael Link, president of the American Association of Public Opinion Research, arguing against non-probability sampling on the (unsupported) grounds that such methods have “little grounding in theory.” But in the real world of statistics, there’s no such thing as a completely safe method. Adjusting for party ID might seem like a bold and risky move, but, based on the above research, it could well be riskier not to adjust.