Reelected Brazilian President Dilma Rousseff (center) blows kisses flanked by her vice president Michel Temer (left) and former Brazilian President Luiz Inacio Lula Da Silva following her win, in Brasilia, on Oct. 26, 2014. (Evaristo SA/AFP/Getty Images)

The following is a guest post by political scientists Francisco Cantú (University of Houston), Marco A. Morales (Instituto Tecnológico Autónomo de México) and Felipe Nunes.
*****

By a narrow margin – 2.9 percentage points – Dilma Rousseff (Workers’ Party, or “PT”) was reelected president of Brazil on Sunday for another four-year term. This was the closest election in Brazil since its return to democracy in 1989, and it sets the course for 16 years of uninterrupted PT government.

As the closely fought campaign picked up steam, polling came under fire at different points with claims of partisan bias. Doubts carried over to the runoff election, especially since it started in a dead heat. Given the average sample sizes of electoral surveys conducted during the runoff campaign – around 2,000 respondents – the gap between the candidates was poised to fall within a margin of error of roughly plus or minus 4 percentage points. Yet, as the runoff election approached, a number of polls fell well outside this margin of error, on one occasion giving challenger Aécio Neves (Social Democrats, or “PSDB”) a 15-point advantage.
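For readers who want to check the arithmetic, here is a minimal sketch (in Python, assuming a simple random sample of the size mentioned above) of the conventional margin-of-error calculation for a single candidate's share and for the gap between two candidates:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a single proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

n = 2000                        # typical sample size of the runoff polls
moe_share = margin_of_error(n)  # error on one candidate's share
moe_gap = 2 * moe_share         # conventional approximation for the gap

print(f"share: +/- {moe_share:.1%}")  # about +/- 2.2 points
print(f"gap:   +/- {moe_gap:.1%}")    # about +/- 4.4 points
```

Under these assumptions the error on a single candidate's share is about plus or minus 2.2 points, and the error on the gap between two candidates roughly twice that, which is consistent with the 4-point figure above.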

In elections with a runoff, like Brazil’s, we get the opportunity to see how polling firms perform when there are more than two candidates, and then to compare that performance with situations where there are just two candidates.

In a two-candidate contest, the overestimation of one candidate is equal to the underestimation of the other (excluding undecideds). But things get more complicated when there are more than two candidates, since a polling firm might correctly estimate vote intention for some candidates but not for others. For example, in a four-way race, a polling firm may correctly estimate just one, two, three or all of the candidates. Knowing which ones are estimated correctly and which ones are not is the tricky part.
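As an illustration (with hypothetical numbers, not any actual poll), the signed error in a multi-candidate race can be computed share by share; notice that the errors need not mirror one another as they do in a two-candidate contest:

```python
# Hypothetical four-way race: poll estimates vs. actual vote shares (percent).
poll   = {"A": 38.0, "B": 30.0, "C": 20.0, "D": 12.0}
actual = {"A": 41.0, "B": 30.0, "C": 17.0, "D": 12.0}

# Signed error per candidate: positive = overestimated, negative = underestimated.
errors = {c: poll[c] - actual[c] for c in poll}
print(errors)  # {'A': -3.0, 'B': 0.0, 'C': 3.0, 'D': 0.0}
```

Here the firm nails two candidates, underestimates one and overestimates another – exactly the pattern that makes per-candidate diagnostics necessary.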

In a previous post, we computed the bias with which each polling firm estimated vote intention throughout the general election. That provided us with a baseline for comparison. Using the same method, we ran a similar analysis on the runoff campaign. Tracking “true” vote intention during the runoff, we see that Neves started with a slight advantage, but the trend reversed midway through the runoff.

Polls in September and October 2014 track the voting intentions of Brazilians. Figure: authors

Perhaps most interesting are the computations of systematic bias in the estimates for each candidate by each firm. During the runoff, Datafolha and MDA did a particularly good job of estimating both candidates, while Ibope and Vox Populi slightly underestimated Neves and overestimated Dilma.

Figure: authors

All this provides us with information to assess the performance of Brazilian polling firms during the general election and during the runoff. In the figure below, we plot our estimate of each firm’s bias for each candidate, along with the range where 95 percent of estimates would fall, for both the general election and the runoff. We see that all firms did a much better job during the runoff than during the general election, when they all underestimated Neves and overestimated Dilma. Yet the most notable improvements came from Datafolha and MDA, which almost completely eliminated their systematic bias during the runoff.

Figure: authors

Polling is a statistical exercise subject to uncertainty. Some of that uncertainty is the product of bad draws and should (theoretically) cancel out in the aggregate. The rest is the product of factors that persist and do not disappear. The method we employed here captures this second type, which is detectable as systematic bias throughout the campaign. These biases can result from sampling techniques or from peculiarities of fieldwork, and so they can appear in any country and affect any pollster. With these estimates we seek to contribute to more robust polling worldwide by pointing out where biases exist and where they do not.
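A toy simulation (our own illustration, not the method used in the analysis above) makes the distinction concrete: random sampling error shrinks when many polls are averaged, while a fixed systematic bias does not.

```python
import random

random.seed(42)
true_share = 0.50   # hypothetical true vote share
bias = 0.03         # hypothetical systematic bias (e.g., a sampling-frame artifact)
n, n_polls = 2000, 50

def poll_once(systematic_bias):
    """One simulated poll: binomial sampling noise plus an optional fixed bias."""
    hits = sum(random.random() < true_share for _ in range(n))
    return hits / n + systematic_bias

unbiased = [poll_once(0.0) for _ in range(n_polls)]
biased = [poll_once(bias) for _ in range(n_polls)]

print(f"unbiased average: {sum(unbiased)/n_polls:.3f}")  # close to 0.50
print(f"biased average:   {sum(biased)/n_polls:.3f}")    # close to 0.53
```

Averaging the 50 unbiased polls recovers the true share, but averaging the biased ones converges on the wrong number – no amount of aggregation removes a systematic bias.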