But chances are there was little to no change at all in public sentiment about the upcoming election. The shift in the poll results coincided with a shift in pollster methodology: from assessing the opinion of all registered voters to assessing opinion among the portion of the registered electorate whom the polling organization deemed likely voters. Although Pew and CBS/NYT emphasize that the sampling frame changed, this might not be clear to the casual observer of polls.
In fact, underlying opinion appears stable from late August to early September. Pew reported the same 47 percent to 42 percent Democratic lead among registered voters at both points in time. The Republican lead in September was due to the change in whose votes counted: only those of “likely” voters. Pollsters count as likely voters those who pass their screen regarding interest, enthusiasm and motivation to vote in the upcoming election.
By the final week of a campaign, likely voter polls usually are more accurate than polls that count the preferences of all registered voters. By election eve, the pollsters’ likely voter screen is useful for determining who votes. But does it work early in the campaign, weeks before the election?
“Registered voter” polls can mislead as predictors of Election Day outcomes since even among the registered, prospective Democrats vote less often than Republicans. The differential in turnout between registered Democrats and registered Republicans is not necessarily a constant, however; the partisan disparity can vary with the two partisan groups’ relative enthusiasm and interest in the current election. Likely voter polls are designed to adjust for this enthusiasm differential.
But when applied early in the campaign, likely voter screens can exaggerate how much the enthusiasm differential will matter in an election still weeks away. Moreover, early likely voter polls are more erratic because counting fewer respondents leads to more sampling error. And when the relative enthusiasm of prospective Democratic and Republican voters oscillates over time, early likely voter polls make it seem like underlying voter preferences change more than they really do. (Also see this recent blog post.)
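To see why a likely voter screen increases sampling error, consider the standard margin-of-error formula for a proportion. The numbers below are purely hypothetical (a poll of 1,000 registered voters of whom 600 pass the screen); the point is only that shrinking the counted sample widens the margin of error:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes: 1,000 registered voters,
# of whom roughly 600 pass a likely voter screen.
moe_registered = margin_of_error(0.5, 1000)  # ~3.1 points
moe_likely = margin_of_error(0.5, 600)       # ~4.0 points

print(f"Registered voters (n=1000): +/- {moe_registered * 100:.1f} points")
print(f"Likely voters     (n=600):  +/- {moe_likely * 100:.1f} points")
```

Cutting the sample from 1,000 to 600 widens the margin of error by nearly a point in this sketch, which is one reason early likely voter polls bounce around more than registered voter polls from the same organization.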
What’s the consumer of polls to think given pollsters’ early launch in 2014 of their likely voter models? We know that neither registered nor likely voter polls give us the true state of play. It is important to be aware of the differences between the two, and it is helpful when polling organizations report results for both populations, not just those for likely voters. It is also helpful when (as Gallup has been in the past) pollsters are transparent about their likely voter methodologies.
There is no magical fix for determining who will vote and who will not when Election Day is in the distant future. But it probably is safe to consider both likely and registered voter reports, hoping that the truth lies somewhere in between.
(CORRECTION: The original post included some misleading inferences about the demographic distribution of likely voters in a recent Pew poll. We incorrectly interpreted Pew’s reported sample sizes for subcategories as weighted rather than unweighted. Using weighted sample sizes, Pew’s breakdowns by age and by race are similar to Pew’s breakdowns for 2010 likely voters. Our inference that Pew classified an unusual number of whites and older people as likely voters is unwarranted. We regret the error very much. We have also added clarification that neither Pew nor CBS/NYT claimed that a shift in opinion had occurred when they shifted from a registered voter sample to a likely voter sample.)
Robert S. Erikson is a professor of political science at Columbia University and Christopher Wlezien is the Hogg Professor of Government at the University of Texas at Austin. They are coauthors of “The Timeline of Presidential Elections: How Campaigns do (and do not) Matter” (University of Chicago Press, 2012) and a follow-up e-book on “The 2012 Campaign and the Timeline of Presidential Elections” (University of Chicago Press, 2014).