About Washington Post Response Rates

Response rates are a way of measuring how well you do in contacting your target population.

The simplest way to think about how they are calculated:

                number of people you actually survey
RESPONSE RATE = -------------------------------------
                number of people you tried to survey

(Actual calculations are much more detailed, of course. If you want to take a look at one of the industry's standard response rate calculators, you can find it on the website of the American Association for Public Opinion Research.)
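
To make the arithmetic concrete, here is a minimal sketch of that simplified ratio in Python. The numbers are hypothetical, and the actual AAPOR formulas distinguish many more case dispositions than this:

    # Simplified response rate: completed interviews divided by the
    # eligible numbers you tried to survey (business lines, fax machines,
    # etc. have already been dropped from the denominator).
    def simple_response_rate(completed_interviews, eligible_numbers_tried):
        return completed_interviews / eligible_numbers_tried

    # Hypothetical example: 320 completed interviews out of 1,000 eligible numbers
    print(f"{simple_response_rate(320, 1_000):.0%}")   # -> 32%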

There are two primary components to a response rate: a contact rate and a cooperation rate.

The contact rate measures the proportion of numbers dialed at which you actually reach a live person, as opposed to getting a busy signal, endless ringing, or an answering machine.

Numbers that initially appear to be home phone lines but turn out to be business numbers, fax machines, etc. are not part of the target sample and are therefore not counted against you in calculating the response rate.

Cooperation rates start with the number of live people you actually reach (that is, contact), and then measure the proportion who agree to take the survey (that is, who weren't cooking dinner, didn't hang up on you, etc.).

So, if you started with a sample of 1,000 randomly selected phone numbers, and you reached someone at each phone number and each of them agreed to take the survey, you would have a response rate of 100 percent. Needless to say, this never happens, even to the most thorough of survey researchers.
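
To see how the two components fit together, here is a rough Python sketch with made-up dispositions (not actual Post figures); multiplying the contact rate by the cooperation rate gives the overall response rate:

    # Hypothetical dispositions for 1,000 eligible phone numbers
    eligible_numbers = 1_000
    live_contacts = 500            # numbers where a live person answered
    completed_interviews = 320     # contacts who agreed to take the survey

    contact_rate = live_contacts / eligible_numbers          # 0.50
    cooperation_rate = completed_interviews / live_contacts  # 0.64
    response_rate = contact_rate * cooperation_rate          # 0.32

    print(f"Contact rate:     {contact_rate:.0%}")
    print(f"Cooperation rate: {cooperation_rate:.0%}")
    print(f"Response rate:    {response_rate:.0%}")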

Response rates are one of the most closely studied issues in survey research, though individual organizations have often kept their rates fairly private. Until recent years, there was no agreed-upon definition for reporting those rates. This led to a good deal of response rate inflation and a real lack of comparability. The survey research industry is now moving toward standardizing them.

Accumulated evidence indicates that response rates are indeed falling over time. Thanks to the proliferation of call-blocking techniques and the growing use of home phone lines for Internet access, it is becoming more difficult to contact respondents at home. Ditto on cooperation rates: flooded with calls from salespeople, Americans are becoming increasingly loath to trust that a stranger on the phone is indeed just trying to measure their opinions, no purchase required.

The $64,000 question, of course, is whether lower response rates translate into lessened data quality. This in turn hinges on whether the people that you don't contact are different in some important way from the ones that you do contact.

So far, studies of the topic have not turned up much in the way of major differences. See, for example, the PowerPoint presentation "Response Rates in Surveys by the News Media and Government Contractors" on Ohio State professor Jon Krosnick's home page, or this study by the Pew Research Center, soon to be updated. Obviously, response rates are something that pollsters will continue to monitor obsessively.

What factors tend to result in higher response rates? Being only slightly flip: time and money. The longer a survey stays in the field, the more times interviewers can dial each number, and the more likely they are to find a respondent at home. Staying in the field a number of weeks and calling each number 15 times or more (a standard practice for government-sponsored surveys) is one of the most dependable ways to increase response rates, though it's also costly.

One concern about releasing response rates is that the average poll consumer won't know how to interpret them. What is a good response rate? How low can a response rate go before it affects the quality and representativeness of the study?

As you might guess from the 'time and money' criterion, rates vary widely across different types of surveys. The academic study cited above used a variety of calculations, from the very strict to the fairly lenient, and found response rates for government contractors ranging from 19 percent to 87 percent. Media outlets, on the other hand, varied from 4 percent to 51 percent.

Why the difference? Primarily because media surveys reporting on attitudes toward current political and policy developments don't have the luxury of staying in the field for two months. The results would simply be useless. For an interesting discussion of the tradeoffs involved, see "About Response Rates," by our colleague Gary Langer at ABC News in the May/June 2003 issue of Public Perspective.

Below is a sample of response rates from a variety of Washington Post polls.

  August monthly political survey
    Population surveyed: National general population
    Mode: Telephone
    Number of respondents: 1,006
    Sponsors: Post-ABC News
    Dates conducted: Aug. 25-28, 2005
    Fieldwork by: TNS
    Avg. interview length: 15 minutes
    Response rate: 32%

  Virginia statewide survey: Governor's race
    Population surveyed: Virginia registered voters
    Mode: Telephone
    Number of respondents: 1,036
    Sponsors: Post
    Dates conducted: Sept. 6-9, 2005
    Fieldwork by: TNS
    Avg. interview length: 13 minutes
    Response rate: 29%

  September monthly political survey
    Population surveyed: National general population
    Mode: Telephone
    Number of respondents: 1,201
    Sponsors: Post-ABC News
    Dates conducted: Sept. 8-11, 2005
    Fieldwork by: TNS
    Avg. interview length: 14 minutes
    Response rate: 37%

  Survey of hurricane evacuees in Houston shelters
    Population surveyed: Evacuees residing in major Houston shelters
    Mode: In person
    Number of respondents: 680
    Sponsors: Post-Kaiser-Harvard
    Dates conducted: Sept. 10-12, 2005
    Fieldwork by: ICR
    Avg. interview length: 18 minutes
    Response rate: 92%

Do we wish these were higher? Yes. Do we check the representativeness of our sample against Census figures and other survey outcomes? Yes. At this time do we see any evidence that our surveys are unrepresentative of the populations we are trying to study? No.

© The Washington Post Company