We received a few reactions to yesterday’s post on the mythical swing voter:
1. In comments, Yphtach Lelkes points to this 2004 paper by Robert Erikson, Costas Panagopoulos and Christopher Wlezien on problems with likely-voter screens and the way in which such screens can exaggerate opinion swings during a campaign. The argument of Erikson et al. is not the same as ours — in particular, the “mythical swings” discussed in our paper occur even in the absence of likely-voter screening — but we agree that the two papers are related.
2. In email, Corwin Smidt writes about some technical issues regarding adjustment during periods when party ID is itself varying:
I [Smidt] don’t disagree with the findings in regards to vote intention swings, but I would encourage you and your co-authors to provide further elaboration and details of your recommendation that even non-panel polls should weight on Party ID.
I’ve attached my recently published findings of Party ID dynamics during presidential campaigns in POQ (I cite you frequently). I sympathize with the weight on Party ID argument, but I think to do it in an appropriate way you need to evaluate estimates across all the potential models of PID dynamics (flat, trend, structural break, …), calculate posterior probability of each one’s applicability and the partisan landscape it estimates, and then estimate % by summarizing across those different possible worlds. At that point, I think the uncertainty gets too high for national pollsters to say anything meaningful (at least in RDD world; YouGov may be less but they have additional model-based uncertainties).
I guess you could argue my published findings are a function of non-response, one reviewer mentioned that possibility and I don’t doubt they contribute. But I don’t think the patterns and the size of the swings are entirely explained by a non-response story. There is also clear evidence from the 1980 NES campaign panel data that Party ID exhibits long-lasting changes during campaigns.
The challenge then is that if pollsters weight on Party ID, they have a real need to account for Party ID changes within the weights they choose. Simply using a trend or 2-week moving average that potentially underestimates swings can have as many pitfalls as overestimating swings (normatively speaking) when we consider how journalists cover polls and momentum and how that influences voter behavior.
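Smidt's suggested procedure — fit several candidate models of party-ID dynamics, compute each one's posterior probability, and average the estimated partisan landscape across those "possible worlds" — can be sketched roughly as follows. This is a toy illustration with invented data and BIC-based approximate model weights, not his actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily series of percent Democratic identifiers over a 60-day campaign,
# generated with a small structural break at day 40 (all numbers invented).
days = np.arange(60)
pid = 33 + np.where(days >= 40, 1.5, 0.0) + rng.normal(0, 0.8, 60)

def fit_rss(X, y):
    """Least-squares fit; return residual sum of squares and fitted values."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid, X @ beta

n = len(days)
ones = np.ones(n)

# Three candidate models of party-ID dynamics: flat, linear trend, one break.
designs = {
    "flat": np.column_stack([ones]),
    "trend": np.column_stack([ones, days]),
    "break": np.column_stack([ones, (days >= 40).astype(float)]),
}

bic, current = {}, {}
for name, X in designs.items():
    rss, fitted = fit_rss(X, pid)
    k = X.shape[1]
    bic[name] = n * np.log(rss / n) + k * np.log(n)  # Gaussian BIC, up to a constant
    current[name] = fitted[-1]  # this model's estimate of today's PID level

# Approximate posterior model probabilities from BIC differences.
b = np.array(list(bic.values()))
w = np.exp(-0.5 * (b - b.min()))
w /= w.sum()

# Model-averaged estimate of the current partisan landscape.
pid_now = float(np.dot(w, list(current.values())))
print(dict(zip(bic, np.round(w, 3))), round(pid_now, 1))
```

The spread of the model weights here is one way to see Smidt's point: when no single model of PID dynamics dominates, the model-averaged uncertainty about the current partisan landscape can get large.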
3. In a post on The Fix, another politics blog at the Washington Post, Aaron Blake agrees with our theory that vote swings in the final month or so of the 2012 presidential campaign were “as much about a lack of poll response from supporters of the slumping candidate as they are about voters changing their minds,” but he thinks we go too far by suggesting in our title that “swing voters are a myth.”
Blake makes the good point that public opinion should be much more stable and much more predictable by party identification in the final stages of a general election for president than in other, less highly-publicized and partisan contexts. Along with this, we agree with Blake that party ID predicts presidential vote better now than it used to. So our characterization of reported vote swings as “mythical” is hardly universal: large swings in public opinion and voting do occur in other settings.
Blake also expresses some doubts about our methodology. He writes, “While the unrepresentative sample is one thing, the opt-in nature of the survey is perhaps more likely to skew the results.” Indeed that’s possible but we do discuss this issue on pages 16-17 of our article and we find no evidence that other sorts of polling would lead to different conclusions. In particular, we obtained similar results from the CBS News/YouGov panel survey and from the cross-sectional Pew data. John Sides and Lynn Vavreck also discuss this a bit in their book “The Gamble.”
Indeed, in his next paragraph, Blake writes, “This study does suggest that the number of swing voters is far smaller than a lot of people might think. That’s true. But other polls (including those mentioned above) already made pretty clear that was the case when it came to the 2012 presidential race.” That’s right. If you analyze the polls carefully, you can figure out what’s going on, but if you look only at raw poll numbers you can be misled. As Peter Kellner of YouGov wrote on 23 Oct 2012, “There are two versions of what has happened in the past three weeks in the battle to be US President. . . . Version one says that the first television debate between Barack Obama and Mitt Romney was a game-changer. If we average the polls conducted by Gallup, Pew, Ipsos, ARG and the Daily Kos, we find that before the debate, Obama was ahead by four points; afterwards Romney led by four – a shift in the lead of eight points. Before the debate, Obama was heading for a clear victory; afterwards, Romney looked the more likely winner. . . . Version two says that the first debate made only a small difference.” In our paper, we provide further evidence for version two.
As we write, “What we have found is consistent with the growing literature on partisan polarization, but it goes against the general attitude among journalists and political scientists that poll swings represent real changes in candidate preference.” To the extent that journalists start to write that our findings are “already pretty clear,” we’ve already made progress.
And, once we accept the value of adjusting for party ID, lots of work remains in the implementation of the method, as demonstrated by Corwin Smidt in the paper discussed above. And we agree with Aaron Blake that real swings in public opinion do occur; indeed, we anticipate that our statistical methods should help identify them better.
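To make the basic adjustment concrete, here is a minimal sketch of weighting poll respondents to a fixed party-ID target. All of the sample composition, support rates, and the target distribution are hypothetical, and the hard problem Smidt raises — that the target itself may be moving during the campaign — is deliberately left out:

```python
import numpy as np

# Hypothetical poll sample: party ID and each respondent's probability of
# backing the Democratic candidate (deterministic toy values).
party = np.array(["D"] * 300 + ["R"] * 200 + ["I"] * 100)
vote_dem = np.concatenate([
    np.full(300, 0.95),  # sampled Democrats
    np.full(200, 0.05),  # sampled Republicans
    np.full(100, 0.50),  # sampled independents
])

# Assumed target party-ID distribution in the electorate. In practice this
# target is itself uncertain and possibly changing, which is the crux of
# Smidt's objection.
target = {"D": 0.35, "R": 0.33, "I": 0.32}

# Weight each respondent by (target share) / (sample share) for their group.
n = len(party)
weights = np.empty(n)
for group, t in target.items():
    mask = party == group
    weights[mask] = t / (mask.sum() / n)

raw_dem_share = vote_dem.mean()
weighted_dem_share = np.average(vote_dem, weights=weights)
print(round(raw_dem_share, 3), round(weighted_dem_share, 3))  # 0.575 0.509
```

With Democrats overrepresented in this toy sample (50% of respondents vs. a 35% target), the unweighted estimate overstates Democratic support, and reweighting pulls it back down. The mythical-swings point is that differential nonresponse moves the sample composition from week to week, so the raw numbers swing even when the weighted ones do not.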