We should care about this because it’s all too easy to launch into a new story for each poll: Hillary’s up because she did this, she’s down because she did that, and so forth. But if we try to explain each roll of the dice, we’re overexplaining — and, ultimately, explaining nothing.
Reuters-Ipsos provides a nifty graphic showing tracking polls:
The poll is of likely general-election voters, the blue line is Clinton support, the red line is Trump, and the yellow is Other/Wouldn’t vote/Refused.
These are a good summary of polls. In aggregate, there’s no reason to distrust them. But before leaping to explain every blip, let’s hear from political scientist Alan Abramowitz, an expert on public opinion and polarization, who saw this graph and wrote:
Take a look at the results from their tracking poll — they are laughable. Huge variations over time although Clinton usually has a substantial lead. For example, on March 9 they had Clinton leading Trump by 15 points, but on March 13 they had her leading by only 4 and on March 20 they had Trump leading by 2. But by March 29, Clinton was back ahead by 15.

More recently, on May 4, Clinton was ahead by 13 but on May 10, only six days later, her lead was down to 1 point.

Now anyone who knows anything at all about public opinion and voting behavior in the U.S. in recent years would regard these results as totally absurd. The electorate does not swing wildly back and forth over the course of a few days like this. The only reasonable conclusion one can draw from these results is based on averaging them over a couple of weeks at a time. When you do that, what you’ll see is a very stable electorate with Clinton holding a solid lead.

It is almost certain that their results will continue to swing wildly back and forth in the future. There is no reason to take this wild variation seriously.
Abramowitz, like me, has studied political polarization. We know that it takes a lot to shift public opinion, especially in a contest between two candidates as distinct as Clinton and Trump. The noise can be bigger than sampling error, because there’s a lot of non-sampling error in these polls, too. (Sampling error is that plus or minus three or four percentage points you hear about when a poll is summarized in the newspaper; it’s the variation you might see if the exact same survey were repeated on the exact same population at the exact same time. Non-sampling error corresponds to differences in the populations of survey respondents at different times, and to day-to-day fluctuation in opinions.)
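To get a sense of how big sampling error actually is, here’s a back-of-the-envelope calculation. The sample size (n = 500) and the candidate shares are hypothetical stand-ins for a daily tracking poll, not the actual Reuters-Ipsos numbers:

```python
import math

# Hypothetical numbers for illustration: a tracking poll with a daily
# sample of n = 500 likely voters, a true Clinton share of 45%, and a
# true Trump share of 40% (the rest Other/Wouldn't vote/Refused).
n, p_clinton, p_trump = 500, 0.45, 0.40

# Standard error of a single candidate's share, in percentage points.
se_share = 100 * math.sqrt(p_clinton * (1 - p_clinton) / n)

# The Clinton-minus-Trump margin is roughly twice as variable as a
# single share (the two shares are negatively correlated, so "2x" is
# a rough upper bound, not an exact formula).
se_margin = 2 * se_share

print(f"SE of one candidate's share: +/- {se_share:.1f} points")
print(f"SE of the margin:            +/- {se_margin:.1f} points")

# How surprising is a 17-point swing in the margin (from Clinton +15
# to Trump +2) if the two polls differ only by sampling error?
swing_z = 17 / (math.sqrt(2) * se_margin)
print(f"A 17-point swing is about {swing_z:.1f} SEs of a poll-to-poll difference")
```

Under these made-up assumptions, the margin’s sampling error is around four or five points, so a 17-point swing sits well out in the tail of what sampling error alone would produce; when such swings recur every couple of weeks, non-sampling error is doing much of the work.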
As an aside, the chart is pretty and the dynamic graphics are very well done, but presenting numbers such as “41.3%” is hyper-precision. Even saying “41%” could give a misleading sense of accuracy. But “41.3%”? C’mon. I mean, why don’t you just go whole hog and say 41.32134879234892349723 percent? Why stop at only one decimal place?
Even if we don’t care about the particular numbers being graphed above — I agree with Abramowitz that they’re mostly noise — I still care about graphical presentation, because these are the same tools that will be used to display less noisy data such as trends in the economy or the environment.
Okay, back to the polls.
All election news is taken as partisan, so I can see how you might read this post, which plays down the relevance of the apparent Trump surge, as anti-Trump or pro-Clinton. But it’s not. Remember the big story from a few months ago? Trump led in the polls, and then every blip, every time he dropped by a few points in some survey, was taken as evidence that he’d finally peaked and was doomed. But it didn’t happen. Those blips were just blips. So I’m saying now what I could’ve said then: Don’t take these blips so seriously.
This is not to say that public opinion is fated to be constant. There could be changes, and indeed the latest blip could represent something real, just as Trump’s big drop in the polls on April 10 (look carefully at the above graph and you’ll see it) could have meant something. It’s just that the latest polls don’t provide much evidence.
The polls are not a random walk.
The error is in what Noah Kaplan, David Park and I have called the “random walk model” of polls.
Here’s how Nate Silver put it a few years ago:
In races with lots of polling, instead, the most robust assumption is usually that polling is essentially a random walk, i.e., that the polls are about equally likely to move toward one or another candidate, regardless of which way they have moved in the past.
I have a lot of respect for Silver, but in this case, as Kaplan, Park and I explain in this article from 2012, Silver was wrong. The polls are better described not by a random walk model but by what we call “mean reversion,” in which they are heading toward a particular place, so that the current state of the polls is not the best predictor of what might come next.
We suspect that the popularity of the random walk model — indeed, its uncritical acceptance by many election observers — arises by analogy to the random walk model of stock prices. The current price of a stock is its value, and to the extent the market is “efficient,” you should not be able to predict whether the price will rise or fall. And, indeed, it can make sense to view betting markets as, approximately, random walks.
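The difference between the two models is easy to sketch in a few lines of simulation. To be clear, this is just an illustration, not the model from our article: the mean-reverting series is a simple AR(1) pulled toward a hypothetical underlying preference (a 7-point Clinton lead), and all the parameters (step size, pull strength, starting deficit) are invented for the example.

```python
import random

random.seed(0)

def random_walk(start, steps, sd):
    """Each day's value is yesterday's value plus pure noise."""
    x, path = start, [start]
    for _ in range(steps):
        x += random.gauss(0, sd)
        path.append(x)
    return path

def mean_reverting(start, steps, sd, mean, pull=0.2):
    """AR(1)-style series: each day is nudged back toward `mean`."""
    x, path = start, [start]
    for _ in range(steps):
        x += pull * (mean - x) + random.gauss(0, sd)
        path.append(x)
    return path

# Start both series at a 2-point Clinton deficit (a "Trump surge"
# blip); assume, hypothetically, underlying opinion is Clinton +7.
rw = random_walk(-2, 60, sd=1.5)
mr = mean_reverting(-2, 60, sd=1.5, mean=7)

print(f"random walk ends at    {rw[-1]:+.1f}")
print(f"mean reversion ends at {mr[-1]:+.1f}")
print(f"mean-reverting average over the last 30 days: "
      f"{sum(mr[-30:]) / 30:+.1f}")
```

The point of the sketch: under a random walk, today’s value is the best forecast of tomorrow’s, so the blip persists; under mean reversion, the series drifts back toward the underlying mean, so the blip is just a blip.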
But polls are not betting markets. Polls are polls, and there’s lots of evidence that they’re not random walks, and that it is, in general, incorrect to take the most recent poll as a starting point for thinking about public opinion. Instead, as Abramowitz says, we should do some averaging and not think of the latest poll as representing our current state of knowledge.
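As a toy illustration of the averaging Abramowitz recommends, here is a seven-day trailing average applied to a made-up series of daily margins that mimics the swings he describes; the numbers are invented, not the actual Reuters-Ipsos data:

```python
def moving_average(series, window=7):
    """Trailing average over full windows: each day's estimate pools
    the last `window` polls instead of trusting only the newest one."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

# Invented daily Clinton-minus-Trump margins echoing the swings in the
# quote above: +15 down to +4, briefly Trump +2, then back up.
margins = [15, 11, 6, 4, 1, -2, 3, 8, 12, 15, 13, 9, 5, 1, 7, 10]

smoothed = moving_average(margins)
print("raw spread:     ", max(margins) - min(margins))
print("smoothed spread:", round(max(smoothed) - min(smoothed), 1))
```

With these invented numbers, the raw series spans 17 points while the seven-day average spans about 5: same data, but the averaged view shows a fairly stable electorate with Clinton ahead, which is the story the blips obscure.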
News organizations have an incentive to present and talk about the latest polls, the more fluctuation the better. Fluctuation in polls is news! But it shouldn’t be.