DISPATCH FROM A POLLSTER
In Sea of Data, Not All Numbers Are Equal
Tuesday, November 14, 2006
When I called my grandmother on Election Day, she said, "I hate pollsters." She's probably not alone as we collectively recover from Nov. 7 and its hysterical run-up.
People paying attention to this election -- and even those trying not to -- got their fill of polls. And in many cases, the numbers unhelpfully painted starkly different pictures of this year's campaigns.
"Poll Puts Maryland Democrats in the Lead," The Washington Post reported in late October on the state's races for governor and U.S. senator, giving the Democratic candidates sizable advantages. Only hours before that poll came out, a slew of other polls showed the two campaigns much closer, leading the respected Cook Political Report to add the Senate race to its list of tossups.
Both in Maryland and across the country, candidates trailing in one poll could point to another poll in which the race was "too close to call" or even had the opposite estimate. One can hardly blame a trailing candidate for using implausible polls as a pick-me-up, but the effect is universally confusing.
And that's too bad. Polling is central both to reporting and to understanding who we are, why we vote and what we expect out of our political system.
Before last week's election, President Bush derided election "prognosticators," but no one would be so blithe as to dismiss public attitudes, as communicated via ballots or reliable polls.
For the media, public opinion data help flesh out and add context to reporting and expert analyses. Survey data help stories rise above anecdote and provide a broader perspective.
That's why it's tempting to use numbers in stories and arguments and why it's so essential to recognize that not all numbers are created equal. So, what to do?
One vogue approach to the glut of polls this year was to surrender judgment, assume all polls were equal and average their findings. Political junkies bookmarked Web sites that aggregated polls and posted five- and 10-poll averages.
But, perhaps unsurprisingly, averages work only "on average." For example, the posted averages on the Maryland governor's and Senate races showed them as closely competitive; they were not. Polls from The Post and Gallup showed those races as solidly Democratic in June, September and October, just as they were on Election Day.
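The pitfall can be sketched in a few lines of arithmetic. The numbers below are invented for illustration only (they are not from the actual Maryland polls): suppose two rigorous polls find a double-digit Democratic lead while three lower-quality polls find a dead heat.

```python
# Hypothetical poll margins (Democrat minus Republican, in points).
# These figures are invented for illustration, not drawn from any real poll.
margins = [12, 11, 3, 2, 1]  # two high-quality polls, three dubious ones

# Treating all five polls as equal, as the aggregation sites did:
average = sum(margins) / len(margins)
print(average)  # 5.8 -- the simple average splits the difference,
                # recasting a solidly Democratic race as "closely competitive"
```

The unweighted average lands nowhere near what the better polls found; surrendering judgment about poll quality doesn't remove the error, it just averages it in.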
These polls were not magically predictive; rather, they captured the main themes of the election that were set months before Nov. 7. Describing those Maryland contests as tight races in a deep-blue state, in what national preelection polls rightly showed to be a Democratic year, misled election-watchers and voters, although cable news networks welcomed the fodder.
More fundamentally, averaging polls encourages the already excessive attention paid to horse-race numbers. Preelection polls are not meant to be crystal balls. Putting a number on the status of the race is a necessary part of preelection polls, but much is lost if it's the only one.