Margin for Error in Reporting on Polls

By Andrew Alexander
Sunday, July 26, 2009

You don't need a survey to know that Americans love polls. Election polls. Issue polls. Polls on whether Michael Jackson was murdered. Polls on boxers or briefs for men.

Polls are proliferating, and so are problems with the way they're conducted and reported. The Post is taking steps to tighten controls on both.

Several weeks ago, the news staff was given updated standards that spell out do's and don'ts.

The goal, said Post polling director Jon Cohen, is to remind reporters and editors that "not all polls are equal." While many are statistically valid, others are pseudo surveys masquerading as serious scientific research. And some are pure hokum.

"The numbers [of polls] are going up," said Richard A. Kulka, until recently president of the American Association for Public Opinion Research, which promotes quality polling. "It's cheaper than ever to do them . . . especially if you don't use accepted probability standards."

Public opinion research has been an important part of The Post since 1935, when it started carrying "America Speaks," a new syndicated column by polling pioneer George Gallup. The paper even paid for a blimp to advertise it above Washington.

Today, The Post is one of a handful of American newspapers with a major commitment to its own polling. It conducts a national poll as often as once a month in a decades-long partnership with ABC News. The Post's methodology is available on its Web site, including the wording of all questions.

The updated standards -- Cohen said they will eventually be made public -- are all about credibility.

"We need to be as skeptical about numbers as we are other types of information," managing editors Liz Spayd and Raju Narisetti wrote in a note to the staff accompanying them. "Just because we see a number and a percentage sign assigned to something does not make it fact."

Quality news organizations such as The Post know to ask baseline questions to establish a poll's reliability. Who paid for the survey (are they pushing a point of view)? How were the questions worded (were they leading)? How was the sample chosen (was it truly random)? How large was it (100 or 1,000)? What's the margin of error (is it large enough to swallow the differences being reported)? Who was surveyed (for an election poll, were they "likely" voters)? When were they surveyed (was it before the leading candidate's big gaffe)? How were they questioned (personal interview, automated phone response or on the Internet)?
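The sample-size and margin-of-error questions are the easiest to check yourself. For a simple random sample, the standard textbook formula ties the two together; this short sketch (an editor's illustration, not anything from The Post's methodology) shows why a 1,000-person poll is trusted far more than a 100-person one:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured in a
    simple random sample of n respondents (z = 1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case is an evenly split question (p = 0.5).
# A 1,000-person sample gives roughly +/- 3 percentage points...
print(round(margin_of_error(0.5, 1000) * 100, 1))  # 3.1
# ...while a 100-person sample gives roughly +/- 10.
print(round(margin_of_error(0.5, 100) * 100, 1))   # 9.8
```

With a 10-point margin of error, a 52-to-48 "lead" is statistical noise, which is exactly why the sample-size question matters.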

But Post reporters are being asked to probe deeper and to be especially wary of unproven new polling techniques.

Among them are "click-in" polls, where participants respond to survey questions online. They are widely considered unreliable because the sampling isn't random.

Likewise, there is concern about telephone "robopolls," where a recorded voice prompts people for responses ("press 1 for yes"). They've shown promise. But unlike a live phone interview, there's no way to verify whether an 8-year-old is on the line pushing the buttons.

Leading up to last month's Virginia Democratic gubernatorial primary, The Post refused to report on three campaign polls because of concern about their reliability. Two were "robopolls." The campaign releasing the other poll would divulge only four of 39 questions.

"By publishing numbers of uncertain quality or ones lacking essential context, we amplify those findings, and risk misleading you," Cohen wrote in a Metro section piece explaining the decision.

Refusing to report on a flawed poll can put The Post at a competitive disadvantage because the poll will surely show up elsewhere. But, Cohen said in an interview last week, "I would argue that integrity is a competitive advantage."

As The Post worries about the validity of outside surveys, it also frets about some of its own. The Post's Web site routinely carries unscientific "polls" to spur reader interaction. A weekly feature called "Friday Follies -- Totally Random Polls" invites readers to vote on lighthearted questions. They can view a running tally of responses. After D.C. Council member Marion Barry was arrested on a stalking charge that was later dropped, nearly half of 519 who participated in the "poll" agreed, "He was clearly up to no good."

"People have been encouraged to include little 'polls' as part of blog posts, or in other ways, to increase [online] engagement," Cohen said. But the new standards insist that they include this advisory: "This is a non-scientific user poll. Results are not statistically valid and cannot be assumed to reflect the views of Washington Post users as a group or the general population."

As always, transparency is key.

Andrew Alexander can be reached at 202-334-7582.


© 2009 The Washington Post Company