
Behind the Numbers
Posted at 04:00 PM ET, 02/16/2012

Covering automated surveys in 2012

Before this election cycle, The Washington Post avoided highlighting any result from automated surveys – polls that use computers to dial landline telephone numbers and have a recorded voice ask questions of whoever answers.

The methodological shortcomings of these polls are clear – and growing as the number of adults abandoning home telephone service leaps higher. (Federal law bars robopolls from calling cellphones.)

Late last year, however, we modified our approach to these surveys. We now include contextual coverage of horse-race results from automated polls, because they have – in some cases – been used by pollsters to rack up impressive track records with end-of-campaign election predictions, and because they are an essential part of the political debate.

We’re not using them for other things.

After all, getting accurate results close to an election (plus or minus) is a necessary but insufficient condition for a valid poll. Good forecasts can come from bad models. But our shift on horse-race results recognizes that campaign polling is only theoretically like survey research in other areas.

There’s a crucial twist in horse-race polls, in which snapshots of public opinion are “modeled” to reflect a hypothetical future population: voters on Election Day. Perhaps it’s possible to model one’s way around flaws in campaign polls (at least until an election turns out to be different from previous ones).
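To make the idea of “modeling” concrete, here is a minimal, hypothetical sketch (in Python) of how a snapshot of candidate preferences can be reweighted by assumed turnout probabilities. The respondents, probabilities and function names are invented for illustration; this is not the Post’s approach or any pollster’s actual method.

    # Hypothetical sketch of "likely voter" modeling, for illustration only.
    # Each respondent gets an assumed probability of actually voting; the raw
    # snapshot of preferences is reweighted toward that hypothetical electorate.

    respondents = [
        # (candidate preference, assumed probability the respondent votes)
        ("A", 0.9), ("A", 0.4), ("B", 0.8), ("B", 0.7), ("A", 0.2), ("B", 0.95),
    ]

    def raw_share(data, candidate):
        """Unweighted snapshot: share of all respondents backing the candidate."""
        return sum(1 for pref, _ in data if pref == candidate) / len(data)

    def modeled_share(data, candidate):
        """Weighted estimate: each respondent counts by assumed turnout probability."""
        total = sum(p for _, p in data)
        return sum(p for pref, p in data if pref == candidate) / total

    for c in ("A", "B"):
        print(c, round(raw_share(respondents, c), 2), round(modeled_share(respondents, c), 2))

In this toy example the raw snapshot is a 50-50 tie, but the modeled electorate splits roughly 38-62, showing how heavily the final estimate can depend on the turnout assumptions rather than on the interviews themselves.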

Michael Traugott, a University of Michigan professor and former president of the American Association for Public Opinion Research (AAPOR), says his views have “evolved somewhat” since he first labeled automated polls as CRAP, for “computerized response audience polls.” But he notes the root problem: “Modeling seems to work in estimating elections, but we don’t know how or why,” he said. “And, of course, we have no idea how it applies to other measures of attitudes and policy preferences.”

In our updated approach, we make limited use of horse-race results from automated polls, but we don’t use such data as reliable, broader gauges of public attitudes.

Even if “getting elections right” seems to be a good measure of accuracy, it’s a big leap to other areas of survey work. As Peter Miller, an emeritus professor at Northwestern University who recently served as AAPOR president, puts it: Automated polls “rely too much on assumptions to make estimates based on data from an increasingly unrepresentative part of the population. Heroic assumptions will lead to big, unpredictable errors.”

Our aim is to minimize this risk. We can’t tolerate large, potentially surprising biases in handicapping elections, or a fundamental misunderstanding of voters and political trends. Our mission is to understand what voters (and non-voters) really want.
