5 Tips for Decoding Those Election Polls
As the 2008 campaign roars into Iowa and New Hampshire, anyone following the polls is probably finding it increasingly difficult to separate signal from noise. So here's a brief user's guide to the coming bounty of data.
1. Throttle back on the horse race.
Sure, keeping track of the score is fun. But like caramel-coated popcorn, it's addictive rather than truly nourishing. Odd as this advice may sound coming from two pollsters, ease up. Polls are better used not merely to tell us who's winning but why.
What issues motivate voters? Which policy proposals and candidate characteristics seize their imaginations? What are the key divisions among groups of voters? By answering these questions, good polls help us see the underlying dynamics of the election -- not just the bare numbers but citizens' real concerns.
The horse race isn't just less substantive; it's also not predictive. Plenty of likely voters in Iowa, New Hampshire and elsewhere are still reserving the right to change their minds. And polls have real limits here: The who's-up, who's-down numbers are imperfect estimates, often prone to more volatility than just about anything else we measure.
Why? For one thing, there's the interplay of voters' changing minds and campaigns' tactics. For another, even high-quality polls use different models to work out who's a "likely" voter. Widely varied estimates of the number of "undecided" voters are more often a function of polling techniques than of true indecision. And focusing on the gap between candidates, rather than on each one's level of support, is a sure way to exaggerate small differences in polls.
If you really need to sweat out whether Sen. Hillary Rodham Clinton is at 28 percent or 34 percent in Iowa (where there'll be fewer caucusgoers than there are seats at the Indianapolis Motor Speedway), have at it. Personally, we'd rather know whether voters' top priority is change or experience, how the economy stacks up as an issue or how religion is shaping the Republican race. Some polls, sadly, don't even bother to ask.
2. Consider the source.
Polls too often get a bye on journalism's central tenet: Consider the source. Anything else that flies in over the transom gets checked out before we accept it as real, but numbers are often somehow too compelling. They elevate anecdote; they lend authority and credibility to what's otherwise anybody's guess. We need 'em. We want 'em. And we run with 'em -- all too often without stopping to check.
In reality, there are good polls and bad, reliable methods and unreliable ones. To meet reasonable news standards, a poll should be based on a representative, random sample of respondents; "probability sampling" is a fundamental requirement of inferential statistics, the foundation on which survey research is built. Surrender to "convenience" or self-selected samples -- the click-in polls so common on the Internet -- and you're quickly afloat in a sea of voodoo data.
Probability sampling has its own challenges, of course. Many telephone surveys are conducted using techniques that range from the minimally acceptable to the dreadful. When it's all just numbers, these, too, get tossed into the mix, like turpentine in the salad dressing.
A few guidelines: Publicly released partisan polls consistently overstate their side's support; steer clear. Polls churned out like so many assembly-line widgets often lack the rigor that reliable research demands. Be wary of automated "robo-polls" and unrepresentative Internet click-ins. Look for telephone surveys produced by known and credible sources that offer a detailed disclosure of their methodology, questionnaires and results.