About Washington Post Polls

Tuesday, March 31, 2009 9:03 AM

Who do you interview and where do they live?

How are poll respondents selected?

I'd like to participate in your polls; where do I sign up?

Can 500 people, or 1,000 people, or even 1,500 people, represent all Americans?

Why haven't I been called?

How can it be that my friends and I think one thing, but your polls say that most Americans think something else?

I want to know exactly what you asked....

How do I know polls work?

How do you decide what to ask?
Don't a lot of people hang up on you, and does that mess up your results?

What's the deal with the margin of error?

Where can I find the hard-core, wonky details of Post polling methodology?

Q: Who do you interview and where do they live?

Most, though not all, of our surveys are conducted nationwide. Anyone who lives in the continental United States, has a home telephone, and is at least 18 years of age is a potential Washington Post poll respondent.

We also do regular surveys of Maryland, Virginia and the District of Columbia.

In our national polls our goal is to interview a sample of Americans that closely mirrors the make-up of the country, including people from all parts of the nation, of varying ages, racial backgrounds and party affiliations.

In recent polls we have interviewed, and then with their permission quoted: a 28-year-old mechanic in Columbus, Ohio; a 63-year-old retired government worker who lives in Mooreland, Okla.; and a 40-year-old Manhattanite who works for a health care company.

We get some funny, and occasionally snide, suppositions from readers about whom we are interviewing, e.g.: only Bush White House staffers? Only Democrats? Other journalists? Aside from being outrageously unethical, biasing the sample in any way would be a non-stop route to the unemployment office for us, since our results would immediately stand out (in a bad way) from the other major media polls.

In a typical poll conducted Feb. 6-9, 2003, on the pending invasion of Iraq, we spoke to 1,001 people. Here's how they broke down:

Sex: 522 men, 479 women
Region: 194 Easterners, 228 Midwesterners, 360 Southerners, 219 Westerners
Age: 191 ages 18-30, 285 ages 31-44, 319 ages 45-60, 189 ages 61 and older
Education: 71 who did not finish high school, 267 high school grads, 289 with at least some college, 364 college grads
Party: 311 Democrats, 345 Republicans, 281 political independents
Race: 794 whites, 85 African Americans, 102 people of another race

After the data are collected, the results on certain demographic characteristics - including age, race, sex and education - are compared to the most recent Census Bureau estimates and adjusted where necessary. This process is called "weighting" the data, and is standard procedure in public polls to improve the representativeness of the sample.
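A hedged sketch of how such weighting can work, using the men/women counts from the sample above. The Census benchmark shares below are illustrative placeholders, not actual Census figures:

```python
# Illustrative post-stratification weighting: each respondent in a group
# gets a weight equal to that group's population share divided by its
# share of the raw sample. Benchmark shares are made up for this demo.

census_share = {"men": 0.48, "women": 0.52}   # hypothetical Census benchmarks
sample_counts = {"men": 522, "women": 479}    # raw interviews from the poll above
n = sum(sample_counts.values())               # 1,001 respondents

weights = {g: census_share[g] / (sample_counts[g] / n) for g in census_share}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.3f}")
```

In practice, pollsters weight on several characteristics at once (often via an iterative procedure called "raking"), but the principle is the same: respondents from underrepresented groups count a bit more, and those from overrepresented groups a bit less.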

Q: How are poll respondents selected?

At random, through a process known as Random Digit Dialing, or RDD. The details of the sampling process are a bit complex (those with strong stomachs can click here), but essentially a computer generates lists of working area codes and telephone exchanges (the three digits that follow the area code) and then makes up the last four digits. This way unlisted numbers are included along with listed ones.

Q: I'd like to participate in your polls; where do I sign up?

Appreciate the thought, but no can do. See previous question.

Q: Can 500 people, or 1,000 people, or even 1,500 people, represent all Americans?

Yes - according to statistical and probability theory - if the right methods are used to choose the sample of people.

Stripped of statistics-speak, sampling the population is like testing the temperature of a bowl of soup - you don't need to eat the whole thing, just stir it up and taste a spoonful or two. Or like taking a blood sample - no need to drain the patient dry, a syringe-ful will do.

One key element of probability sampling, as noted above, is that poll respondents must be chosen randomly. Each adult in the population should also have an equal chance of being included in the poll.

Q: Why haven't I been called?

Every year, about 20 million Americans participate in opinion surveys, one-fourth of these in surveys sponsored by the federal government (and this, of course, does not count that ultimate survey, the decennial census). In a typical year, interviewers working on behalf of The Washington Post might talk to about 25,000 people.

So why haven't you and your opinions made the cut? The simplest answer is that the people who conduct each poll need only contact a very small proportion of Americans to represent the opinions of the whole country. A typical Post survey includes about 500 to 1,200 people.

Of course, you might also be screening us out if you have certain call blocking technology.

Q: How can it be that my friends and I think one thing, but your polls say that most Americans think something else?

Most of us live and work surrounded by people who are like us in meaningful ways: similar incomes, similar races, and obviously, similar regional backgrounds. It would be easy to imagine that anti-war feelings ran high across the country if you lived in San Francisco during the Iraq war. Likewise, it would be hard to imagine that anyone opposed the war if you had been living somewhat further south in California's conservative Orange County. That's why we select respondents at random from across the country.

Q: I want to know exactly what you asked....

Of course you do, and you should. Writing loaded questions is the easiest way to bias poll results. But it's also the easiest form of bias to catch: reading the question should suffice. And at least in the major media polls, we've found it to be nearly non-existent (and believe me, we keep a close eye on the competition).

If you have questions about questions, make sure to get a copy of the questionnaire. Questions should be simple and direct. They should avoid jargon or loaded words and phrases. They should offer a reasonable set of answers. Response options should be balanced, so that respondents feel equally comfortable agreeing or disagreeing. Read the questions and make your own judgments.

The Post always posts our data on the website along with the poll story. Look for the link that says "complete poll data."

Q: How do I know polls work?

Polls aren't always a perfect measure of national opinion. There is some error associated with sampling, as well as some caused by such factors as the occasional poorly worded question or large numbers of respondents who refuse to answer. Some of this error is measurable, some is not. Polling is part science, part art.

Those caveats aside, well-designed polls are accurate. One proof: Every four years, pollsters predict support for presidential candidates, and then an election comes along and proves them right or wrong. With a few memorable exceptions (President Thomas Dewey, for example, who existed only in the minds of pollsters), most polls are close to the mark. The National Council on Public Polls reported that the average error of the major polls conducted in the days immediately prior to the 2000 presidential election was 1.1 percentage points. Not bad.

Q: How do you decide what to ask?

We sit down and brainstorm with reporters and editors who are covering the story, be it national politics or the international AIDS epidemic. Once we've decided which topics to cover, we turn to databases of questions that we've asked in the past to see if any of them would be appropriate. Whenever possible we repeat questions over time, the better to track changes in public opinion.

The primary source for trend data (outside an organization's own private archives) is the Roper Center for Public Opinion Research, though you need a subscription to access its material. You can get an idea of what searching for trend questions is like on a free website provided by the Henry J. Kaiser Family Foundation (with whom The Post occasionally conducts polling projects). Note, though, that that database is limited to health care questions.

Q: Don't a lot of people hang up on you, and does that mess up your results?

If anything can keep a paranoid pollster awake at night, it is this question, known in the field as the "response rate" problem. It's not just people turning us down that weighs on our minds, but also the constant changes in call-blocking technology and the growing number of people who rely on cell phones as their primary phone line.

The survey industry - not just public pollsters but the numerous businesses that do commercial market research - is constantly monitoring these trends and attempting to measure their impact on the accuracy of our work.

This all becomes a problem only if the people we don't reach are somehow different from the ones we do reach. For example, if all Democrats stopped answering surveys, our presidential horse-race polls would be wildly off the mark. So far there is no evidence that this is happening.

For more information on response rates click here: About Washington Post Response Rates.

P.S.: Please do not hang up on us.

Q: What's the deal with the margin of error?

A survey's "margin of sampling error" tells you how far off the mark you could be in your estimate due to the fact that you talked to a sample of the country's population instead of calling every single person. This is the part of total survey error that is actually measurable, thanks to the laws of statistics. There are other forms of error whose effects can't be measured as precisely.

An example of how to interpret a given margin of error (MOE): Say a survey finds that 73 percent of the public back the invasion of Iraq, and the survey's margin of sampling error is plus or minus 3 percentage points. This means that if the survey were repeated many times, 95 times out of 100 the result would fall between 70 percent and 76 percent, with 73 percent the best single estimate.

The size of the sample is the primary factor affecting the size of the MOE, though not the only one. The formula below shows how the two are related (assuming, of course, that the sample has been selected at random). Working through it for a few sample sizes shows why pollsters rarely field samples above 1,500 respondents: even a significantly larger sample does not shrink the margin of error by much.

MOE = 1.96 * square root of (0.25 / number of respondents)

In words: First, divide .25 by the number of poll respondents. Second, take the square root of the result. Third, multiply that result by 1.96.
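The same arithmetic in a short script, which also shows why samples much larger than 1,500 buy little extra precision:

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Maximum sampling error, in percentage points, at 95% confidence."""
    return z * math.sqrt(0.25 / n) * 100

# Margin of error shrinks slowly as the sample grows.
for n in (500, 1000, 1500, 3000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")
```

Doubling the sample from 1,500 to 3,000 respondents shaves the margin only from about 2.5 to about 1.8 percentage points, while doubling the interviewing cost.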

Q: Where can I find the hard-core, wonky details of Post polling methodology?

To spare the casual reader the tedium, we've put the nitty-gritty here.

20 Questions You Should Ask About Any Poll

If you've gotten to this point, you might as well go the distance and check out this good resource: 20 Questions a Journalist Should Ask About Poll Results. Written by Sheldon R. Gawiser of NBC News and G. Evans Witt of Princeton Survey Research Associates, it sets out the things you need to know in order to be a smart consumer of survey research.

The questions cover the basics: "How many people were interviewed for the survey?" (Q.3), "Who should have been interviewed and was not?" (Q.7) They also explain the problems with Internet and so-called "push" polls.

Our favorite query was Q.19: "With all these potential problems, should we ever report poll results?" Yes, they assure us. "In spite of the difficulties, the public opinion survey, correctly conducted, is still the best objective measure of the state of the views of the public."



Other sources:
National Election Studies
General Social Survey
Roper Center for Public Opinion Research
National Council on Public Polls (NCPP)
Polling Report
Public Agenda
The Henry J. Kaiser Family Foundation
VCU poll
The Gallup Organization
The Pew Research Center

Related professional organizations:
American Association for Public Opinion Research (AAPOR)


© 2009 Washingtonpost.Newsweek Interactive