Breaking a decades-long tradition, Gallup, the best-known name in polling, has decided not to conduct polls of candidate support in the 2016 presidential primaries, and it may skip asking how people will vote in the general election as well.

Gallup’s decision, first reported by Politico, comes nearly three years after criticism of its 2012 polling accuracy and raises important questions about the role and capabilities of polling in elections. Here are some of the answers.

What's different?

First, Gallup was the most prolific pollster in 2008 and 2012, publishing new polls nearly every day tracking national support during the primary and general election contests. The sheer volume of data was enormous, with well over 100,000 interviews conducted in 2012, far more than other public telephone surveys, which are typically conducted once or twice a month.

Second, Gallup built its historical reputation for accuracy on pre-election surveys, but in 2012 its results raised doubts about the poll's methodology. Gallup's final general election tracking poll found Mitt Romney one percentage point ahead of President Obama (49 to 48 percent), which it termed a “statistical tie” given the poll's 2-point margin of sampling error. When votes were counted, Obama won by four points, 51 to 47 percent.
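For readers who want the arithmetic behind a “statistical tie,” here is a minimal sketch of how a margin of sampling error is computed. The sample size below is an assumption chosen for illustration, not Gallup's actual figure.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of sampling error for a proportion p based on n interviews."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: assume roughly 2,700 likely-voter interviews in a final tracking poll.
moe = margin_of_error(p=0.49, n=2700)
print(f"Margin of error: +/- {moe * 100:.1f} points")
# With a margin near 2 points, a one-point gap (49 to 48) cannot be distinguished
# from a dead heat, which is why Gallup called it a statistical tie.
```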

The disparity was not much different from 2008's, nor much beyond the margin of sampling error, but a detailed post-election review revealed several systematic factors that biased Gallup's estimates in Romney’s direction. One of the problems Gallup identified -- the way it measures race and ethnicity and weights its samples to match the population -- was criticized in a Huffington Post investigation of the firm's past surveys in June 2008.
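To make the weighting issue concrete, here is a hypothetical sketch of the kind of demographic adjustment involved. The category shares below are invented for illustration and are not Gallup's actual figures or targets.

```python
# Hypothetical illustration of demographic weighting (poststratification).
# The sample shares and population targets below are invented, not Gallup's figures.
sample_share = {"white": 0.78, "black": 0.10, "hispanic": 0.08, "other": 0.04}
population_target = {"white": 0.72, "black": 0.12, "hispanic": 0.11, "other": 0.05}

# Each respondent in a group gets weight = target share / sample share, so groups
# under-represented in the sample count for more and over-represented groups for less.
weights = {group: population_target[group] / sample_share[group] for group in sample_share}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")

# If an under-represented group leans toward one candidate, failing to weight it up
# biases the topline estimate toward the other candidate.
```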

[Gallup explains what went wrong in 2012]

Obama's campaign leaders were highly critical of the firm's polling during and after the campaign, and campaign manager Jim Messina offered a derisive reaction to the news that Gallup will sit out 2016 horse race surveys.

(Disclosure: Co-author Scott Clement was involved in Gallup’s 2013 election poll experimentation in the Virginia and New Jersey governors' elections as a graduate student at the University of Maryland. Clement was not paid to work on the projects.)

Why is Gallup scaling back?

Gallup editor-in-chief Frank Newport tells Politico that the organization has “shifted its resources into understanding issues facing voters.” In a separate e-mail to us, Newport offered a fuller explanation of Gallup’s reasoning:

In the 2012 cycle we invested a huge amount of time, money and interviewing in tracking the horse race on a nightly basis. Our question in this cycle: is this the best investment of resources to fulfill the mission of helping understand what is going on in a presidential election and hopefully helping make the nation better off as a result. Our thinking is that it is not; that tilting those resources more toward understanding where the public stands on the issues of the day, how they are reacting to the proposals put forth by the candidates, what it is they want the candidates to do, and what messages or images of the candidates are seeping into the public’s consciousness can make a more lasting contribution. This may not be the focus that gets the most “clicks” or short-term headlines, but is one which hopefully can make a real difference.  Again, this isn’t based on a lack of faith in the process or the value of horse race polling in general, but rather a focus on how our particular firm’s contribution to the process can be most effective in keeping the voice of the people injected into the democratic process.

In short, Gallup sees a better payoff in conducting surveys on issues beyond who’s ahead and who’s behind. Gallup’s publications this year offer a window into what those types of polls will cover -- tracking favorable ratings of Hillary Clinton and Donald Trump, one-word reactions to the candidates and which party is trusted to handle different issues.

By stepping away from horse race polls, Gallup will also avoid the scrutiny that comes when final pre-election polls are compared with actual results, a relatively rare instance in which a polling method can be assessed against an external benchmark.

Gallup insisted that concern about accuracy was not a reason for backing away from horse race polls. “This is not really the issue,” Newport said. “We did an exhaustive review. Since then we did additional experiments in 2014. It was quite accurate on the generic congressional horserace. I have little doubt that we would be accurate in 2016.” In addition, Newport cited the firm’s tracking of the percentage of Americans without health insurance and the unemployment rate against government benchmarks as validation of its methodology.

What does this mean for election watchers and the polling industry?

Gallup’s retreat from horserace polling may have only a minor impact on 2016 polling in general. There will be a glut of national election polls -- at least one a week during the primaries and more often in the general election -- as a variety of other pollsters maintain their polling of candidate support (including our own Washington Post-ABC News poll). Even without Gallup, there will almost certainly be national telephone surveys producing daily tracking of candidates’ support, and online polls may jump into the mix too.

The drawdown in daily horse race numbers may be healthy for all of us. (Editor's note: Speak for yourself.) The random noise in daily tracking polls (even within the margin of error) can give the impression that support for a candidate is fluctuating more than it is in reality. Gallup’s focus on measures of favorability and views of Congress may offer a more valuable understanding of attitudes than a daily beat on “who’s winning now?!?!?!”
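A small simulation illustrates the point: even when true support is perfectly flat, nightly sampling noise alone produces apparent swings. The support level and sample size here are assumptions chosen purely for illustration.

```python
import random

# Simulate 14 nightly tracking polls of a candidate whose true support never moves.
true_support = 0.48   # assumed, and flat for the entire period
n = 1000              # assumed nightly sample size

random.seed(0)
for day in range(1, 15):
    # Each respondent independently supports the candidate with probability 0.48.
    supporters = sum(random.random() < true_support for _ in range(n))
    print(f"Day {day:2d}: {100 * supporters / n:.1f}%")

# The nightly numbers wander by a point or two purely from sampling error,
# which can look like momentum shifts even though nothing has changed.
```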

Newport offered a nod to the longer-term implications just after Gallup's 2012 election problems, suggesting that political polling faced a reverse “law of the commons” dilemma in which firms like Gallup pay for expensive polls while attention flows to poll aggregation, which is done far more cheaply. Newport argued that the rational decision for an organization is to aggregate surveys rather than bear the expense of producing them -- though if many pollsters go that route, we may face a dearth of quality surveys to aggregate.

The saturation of horse race polls is clear from HuffPost Pollster’s tracking of the national Republican primary race. HuffPost is averaging polls from 32 pollsters to plot trend lines for the race. Only about half of those pollsters conduct traditional phone polls that call landlines and cellphones, as Gallup does; the rest use nontraditional methodologies such as nonprobability online samples or automated polls that play recorded voices asking people to respond.
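For a sense of what such averaging involves, here is a minimal sketch of a simple polling average. The poll numbers are invented, and real aggregators such as HuffPost Pollster apply more sophisticated weighting and smoothing on top of this basic idea.

```python
from datetime import date

# Hypothetical poll results (end date, candidate support in percent); the numbers
# are invented purely to show how a basic polling average is computed.
polls = [
    (date(2015, 9, 20), 24.0),
    (date(2015, 9, 27), 27.0),
    (date(2015, 10, 1), 23.5),
    (date(2015, 10, 4), 26.0),
    (date(2015, 10, 6), 25.0),
]

def simple_average(polls, window_days, as_of):
    """Average all polls that ended within `window_days` of the `as_of` date."""
    recent = [pct for end_date, pct in polls if 0 <= (as_of - end_date).days <= window_days]
    return sum(recent) / len(recent) if recent else None

print(simple_average(polls, window_days=14, as_of=date(2015, 10, 7)))  # 25.375
# Combining many noisy polls yields a steadier estimate of the trend than any
# single survey, which is part of aggregation's appeal -- and its low cost.
```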

Gallup has decided to pivot to a different model, focusing on sussing out the substance of the campaign rather than simply reporting the score of the game.