By Richard Morin
America is awash in presidential polls and reporting on presidential polls. Some of both have been very good; some have been very, very bad.
While others labor to improve the polls, University of Michigan political scientist Michael Traugott works to improve the quality of reporting on surveys. A second edition of the book he wrote with Paul Lavrakas of Ohio State University, "The Voters' Guide to Election Polls," just came out.
Last week at the Brookings Institution in Washington, D.C., Traugott offered guidelines to reporters who write about politics and polls. Among his suggestions:
First things first.
"The foundation for drawing sound conclusions is information based upon appropriate methodology," Traugott says.
Okay, but how do you tell "good" methods from bad? Traugott admits it's hard, but offers two hints to keep the worst polls out of the news. Always be wary of data obtained from a group of self-selected volunteers, a type of survey that includes most Internet polls and the telephone call-in surveys so popular in recent election years. Instead, rely on surveys based on probability sampling, which is designed to produce a true random sample of the population.
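The distinction Traugott draws can be sketched in a few lines of Python: in a probability sample, every member of the sampling frame has a known, equal chance of selection, whereas a call-in or Internet poll lets respondents select themselves. The frame and sample sizes below are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# A hypothetical sampling frame (stand-ins for, say, phone numbers).
population = list(range(100_000))

# Simple random sample: each unit has the same probability of selection,
# drawn without replacement -- unlike a self-selected volunteer sample.
sample = random.sample(population, k=1_000)

print(len(sample), len(set(sample)))  # 1,000 distinct respondents
```

The point is not the code itself but the property it encodes: the pollster, not the respondent, decides who gets asked.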
And pay close attention to question wording. "It is always the case that there is more than one way to ask a question," he says. "But red flags suggesting care in accepting poll results at face value include the use of key phrases or strong adverbs or adjectives that would tend to direct many respondents to one response category or another."
Who's picking up the tab?
Be wary of surveys sponsored by partisan groups. "Sponsorship of the study can indicate the potential for misleading information, relative to the views the public might express under another set of survey conditions," Traugott writes. One big reason to avoid partisan or special interest polls is that the public doesn't believe them. People are less likely to believe a poll sponsored by a group with a known point of view, such as the Democratic Party or the National Rifle Association, than one from an independent organization such as Gallup, according to Lavrakas, who also was at the Brookings session. (One notable exception: People believe the results of a partisan poll if those results run against the sponsor's interests. Thus they are far more likely to believe a poll sponsored by Democrats that finds Republican George W. Bush more popular with voters than Democrat Al Gore than a Democratic poll that finds Gore ahead.)
"Whenever poll data are reported, important elements of the details of the data collection should be disclosed to enable consumers to make their own judgments about data quality and interpretation," Traugott writes. That means, at a minimum, reporting the sample size, the margin of sampling error, who sponsored the poll and who did the actual interviewing and analysis, when the poll was conducted and the wording of key questions.
Other details that should be known, at least by the reporter: the response rate (the percentage of people contacted who actually agreed to be interviewed), the order in which the questions were asked and the size of subsamples. "While information about methodology does not have to appear in the body of a poll-based story, it should form the basis for determining whether or not a poll-based story should be produced," Traugott says.
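For a simple random sample, the margin of sampling error Traugott wants disclosed can be computed from the sample size alone. A minimal sketch in Python, using the standard 95 percent confidence level and the conservative assumption of a 50-50 split (the sample size of 1,000 is an example, not from the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents:
print(round(100 * margin_of_error(1_000), 1))  # prints 3.1 (points)
```

Note the square root in the formula: to cut the margin of error in half, a pollster must quadruple the sample size, which is one reason most national polls settle near 1,000 respondents.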
Good stories are in the details.
Most reporters simply report the marginals, the percentage of respondents who gave a particular answer to a specific question. Often the real news is just beneath this number: While about half the country approves of the job Clinton is doing as president, an overwhelming majority of Democrats like him, a majority of Republicans don't and most political independents generally approve. Traugott advises going beyond simple demographics (gender, age and race) to report how partisanship and prior voting history often tell dramatically different stories. (One caution: Be careful about implying causality. The fact that "two variables are related does not alone imply that one is the cause of the other," Traugott says.)
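The marginal-versus-crosstab distinction can be made concrete with a toy cross-tabulation. The response counts below are invented for illustration, chosen only to echo the approval pattern described above:

```python
# Hypothetical approval responses broken down by party identification.
responses = {
    "Democrat":    {"approve": 320, "disapprove": 80},
    "Republican":  {"approve": 120, "disapprove": 280},
    "Independent": {"approve": 110, "disapprove": 90},
}

def approval_rate(counts):
    return counts["approve"] / (counts["approve"] + counts["disapprove"])

# The marginal: overall approval across all respondents.
total = {"approve": 0, "disapprove": 0}
for counts in responses.values():
    for answer, n in counts.items():
        total[answer] += n
print(f"Overall: {approval_rate(total):.0%}")       # Overall: 55%

# The story beneath it: approval by party.
for party, counts in responses.items():
    print(f"{party}: {approval_rate(counts):.0%}")
```

Here the marginal (55 percent approval) hides the real story: 80 percent approval among Democrats, 30 percent among Republicans and 55 percent among independents.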
Context, context, context.
Don't ignore other polls, past or present. Publishing the results of other surveys on the same topic, taken at the same time and asking the same questions, is a way to build credibility. "This will eliminate some of the 'gee whiz' quality of current reporting about public opinion, as though every reported finding were new and unexpected," Traugott writes. Likewise, comparisons with similar data collected in past surveys are invaluable in capturing changes in public sentiment, he says.
Finally, Traugott advises journalists not to forget they're reporters. Polls are one tool, a very powerful tool, for analyzing and reporting public opinion. But they are not the only tool. "The reporting of public opinion is easier for the public to understand and interpret when poll-based stories are combined with other more traditional forms of reportage," Traugott says. He urges reporters to use "stories about real citizens who exemplify the groups analyzed in the poll," either through person-on-the-street interviews or in focus groups composed of individuals who share characteristics of the survey respondents.
© Copyright 1999 The Washington Post Company