5 Tips for Decoding Those Election Polls

By Gary Langer and Jon Cohen
Sunday, December 30, 2007

As the 2008 campaign roars into Iowa and New Hampshire, anyone following the polls is probably finding it increasingly difficult to separate signal from noise. So here's a brief user's guide to the coming bounty of data.

1. Throttle back on the horse race.

Sure, keeping track of the score is fun. But like caramel-coated popcorn, it's addictive rather than truly nourishing. Odd as this advice may sound coming from two pollsters, ease up. Polls are better used not merely to tell us who's winning but why.

What issues motivate voters? Which policy proposals and candidate characteristics seize their imaginations? What are the key divisions among groups of voters? By answering these questions, good polls help us see the underlying dynamics of the election -- not just the bare numbers but citizens' real concerns.

The horse race is not just less substantive; it's also not predictive. Plenty of likely voters in Iowa, New Hampshire and elsewhere are still reserving the right to change their minds. And polls have real limits here: The who's-up, who's-down numbers are imperfect estimates, often prone to more volatility than just about anything else we measure.

Why? For one thing, there's the interplay of voters' changing minds and campaigns' tactics. For another, even high-quality polls use different models to work out who's a "likely" voter. Widely varied estimates of the number of "undecided" voters are more often a function of polling techniques than of true indecision. And focusing on the gap between candidates, rather than on each one's level of support, is a sure way to exaggerate small differences in polls.
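
That last point is simple arithmetic. In a single poll, the sampling error on the gap between two candidates is considerably larger than the error on either candidate's own share. Here's a back-of-the-envelope sketch in Python; the sample size and support levels are hypothetical, chosen only to illustrate the math:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for one candidate's share p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_gap(p1, p2, n, z=1.96):
    """95% margin of error for the lead (p1 - p2) in the same multinomial poll."""
    return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

n = 600                # an illustrative statewide sample size
p1, p2 = 0.34, 0.30    # hypothetical support for the two front-runners

print(round(100 * moe(p1, n), 1))          # error band on one candidate's share
print(round(100 * moe_gap(p1, p2, n), 1))  # error band on the 4-point "lead"
```

With these made-up numbers, each candidate's share carries roughly a plus-or-minus 4-point error band, but the 4-point "lead" carries one of more than 6 points: a gap headline can be entirely noise even when each underlying number is a reasonable estimate.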

If you really need to sweat out whether Sen. Hillary Rodham Clinton is at 28 percent or 34 percent in Iowa (where there'll be fewer caucusgoers than there are seats at the Indianapolis Motor Speedway), have at it. Personally, we'd rather know whether voters' top priority is change or experience, how the economy stacks up as an issue or how religion is shaping the Republican race. Some polls, sadly, don't even bother to ask.

2. Consider the source.

Polls too often get a bye on journalism's central tenet: Consider the source. Anything else that flies in over the transom gets checked out before we accept it as real, but numbers are often somehow too compelling. They elevate anecdote; they lend authority and credibility to what's otherwise anybody's guess. We need 'em. We want 'em. And we run with 'em -- all too often without stopping to check.

In reality, there are good polls and bad, reliable methods and unreliable ones. To meet reasonable news standards, a poll should be based on a representative, random sample of respondents; "probability sampling" is a fundamental requirement of inferential statistics, the foundation on which survey research is built. Surrender to "convenience" or self-selected samples of the sort that so many people click on the Internet, and you're quickly afloat in a sea of voodoo data.
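
The danger of self-selection can be seen in a toy simulation. Below is a minimal Python sketch with an invented electorate: one candidate's supporters are simply assumed to be three times as likely to bother responding to a click-in poll, and no weighting corrects for it:

```python
import random

random.seed(1)
# Hypothetical electorate: 45% support candidate A, 55% candidate B.
population = ["A"] * 45_000 + ["B"] * 55_000

# Probability sample: every voter is equally likely to be polled.
random_sample = random.sample(population, 1000)

# Self-selected "click-in": A's supporters are assumed three times as
# likely to respond, and the poll never corrects for this.
click_in = [v for v in population
            if random.random() < (0.03 if v == "A" else 0.01)]

pct = lambda sample: round(100 * sample.count("A") / len(sample))
print(pct(random_sample))  # lands close to the true 45
print(pct(click_in))       # wildly overstates A's support
```

The random sample comes back within a few points of the truth; the click-in sample shows candidate A far ahead in a race that candidate A is actually losing. No amount of volume fixes this: a bigger convenience sample just measures the wrong population more precisely.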

Probability sampling has its own challenges, of course. Many telephone surveys are conducted using techniques that range from the minimally acceptable to the dreadful. When it's all just numbers, these, too, get tossed into the mix, like turpentine in the salad dressing.

A few guidelines: Publicly released partisan polls consistently overstate their side's support; steer clear. Polls churned out like so many assembly-line widgets often lack the rigor that reliable research demands. Be wary of automated "robo-polls" and unrepresentative Internet click-ins. Look for telephone surveys produced by known and credible sources that offer a detailed disclosure of their methodology, questionnaires and results.

If you have the time to drill down, do. Look for biased questions and cherry-picked or hyped analyses. Watch for big headlines about small differences and reckless analysis of small subgroups.

3. Watch for consistent change and a meaningful narrative.

Change over time is important, especially when it's consistent, with a clear narrative of what's happening and why. Knowing, say, former Arkansas governor Mike Huckabee's current level of support in Iowa is woefully incomplete without also knowing its trajectory over time -- from 8 percent in our July poll to 24 percent in November to 35 percent in December.

"Trend is your friend," pollsters say. Look at repeat polls from the same organization to gauge movement over time. And, again, look beyond the horse race to other measures: the levels of commitment and enthusiasm from a candidate's supporters, the groups that are more or less fired up, the factors motivating their support. It takes willpower to trudge off to an hours-long Iowa caucus on a dark winter's night. Who's inspired? How? Why?

Look at preferences on the issues; examine views of candidates' various attributes; consider differences among voter groups. The evolution of these views puts the story in context -- elevating mere numbers into something of greater value. Let's call it intelligence.

4. Don't be seduced by averages.

Poll averages are all the rage this year, even ones that purport to show candidates' standings measured down to tenths of a percentage point. (Don't get us started.) This may fill political junkies' seemingly insatiable desire for a minute-by-minute assessment, as if this were the Nasdaq average or our rich Uncle Leo's EKG. In fact, there's a lot less to these averages than meets the eye.

Averaging across polls with different methodologies can easily obscure rather than clarify. If you take a state with few polls -- one good-quality survey, say, and three methodological clunkers -- averaging may well do more harm than good. Averaging polls done across different time periods, with different sampling methodologies, different procedures to estimate "likely voters" (some reasonable, some not) and different numbers of alleged "undecideds" all assumes that these differences make no difference. With this approach, you might as well throw a little Ouija in as well.

The reality is that a good poll is a good estimate. All else being equal (and it never is), a collection of good polls will be an even better estimate, but a collection of good and bad polls won't.
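
A tiny numerical sketch makes the point. The figures below are entirely invented: four well-conducted polls scattered around a true support level, plus one methodological clunker with a built-in skew:

```python
# Hypothetical race where the true level of support is 42 percent.
truth = 42.0

good_polls = [41.0, 43.5, 42.5, 40.5]   # unbiased, ordinary sampling noise
clunker = 49.0                          # one poll with a broken methodology

avg = lambda xs: sum(xs) / len(xs)

print(abs(avg(good_polls) - truth))              # averaging good polls: tiny error
print(abs(avg(good_polls + [clunker]) - truth))  # adding the clunker: error grows
```

Averaging the good polls cancels their random errors and lands a fraction of a point from the truth; folding in the biased poll drags the average a full point off. Averaging only helps with errors that are random. A systematic error doesn't average out, it averages in.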

5. Be skeptical of post-election scorecards.

As surely as night follows day, you can bet that some lucky pollster will spring up after each caucus or primary with a chest-beating announcement about how his or her estimate was the most accurate. Chill. Good technique matters, but post-election assertions that this or that poll was the "most accurate" are, by and large, hokum. Polls tend to converge as Election Day approaches, but their paths to the endpoint vary dramatically. Some pollsters weight their data to previous turnout, some build in high numbers of undecideds and then arbitrarily allocate them to one side or another, others do Lord-knows-what. Sometimes it works, and a career is born.

At the end of the day, we're on solid ground judging polls by their inputs, not their outputs; a lucky guess is not the same thing as a high-quality survey. It takes rigorous, proven sampling methods, well-crafted questions and intelligent analysis to produce a valid, reliable and meaningful understanding of an election. A good estimate will bring us within a few points of the final outcome, but pinpoint accuracy in pre-election polling is a myth. And for a few of us, at least, the aim of the enterprise is not simply to win the horse-race lottery.

One last suggestion: Relax, unless you're one of the candidates. We'll know what the voters decided soon enough -- and, with the help of good polling, we'll even know why.

newspolls@abc.com, polls@washpost.com

Gary Langer is director of polling at ABC News.

Jon Cohen is director of polling at The Washington Post.


© 2007 The Washington Post Company