But this raises another question: which pollster or pollsters were the most accurate in 2016? As he did in 2008 and 2012, Fordham University political scientist Costas Panagopoulos has tabulated the accuracy of the 14 individual national polls that reported results from the last week of the campaign (November 1-8).
The measure of accuracy derives from the research of Elizabeth Martin, Michael Traugott, and Courtney Kennedy. (Wonky interlude: the measure is the natural logarithm of the ratio of Trump’s share to Clinton’s share in the poll, divided by the same ratio in the actual vote.)
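For readers who want to see the arithmetic, here is a minimal sketch of that measure in Python. The function name and the illustrative poll numbers are mine, not Panagopoulos’s; the actual national vote shares (roughly 46.1 percent Trump, 48.2 percent Clinton) are approximate.

```python
import math

def mtk_accuracy(poll_trump, poll_clinton, actual_trump, actual_clinton):
    """Martin-Traugott-Kennedy predictive accuracy measure:
    the log of the poll's Trump-to-Clinton ratio divided by the
    actual Trump-to-Clinton ratio. Zero means perfect accuracy;
    negative values mean the poll overestimated Clinton's share."""
    poll_odds = poll_trump / poll_clinton
    actual_odds = actual_trump / actual_clinton
    return math.log(poll_odds / actual_odds)

# A hypothetical poll showing Clinton 46, Trump 43, scored against
# the approximate actual national vote (Trump 46.1, Clinton 48.2):
score = mtk_accuracy(43, 46, 46.1, 48.2)
print(score)  # negative: this poll overestimated Clinton's share
```

Because the measure compares ratios rather than raw percentages, it is unaffected by how the polls handle undecided voters or third-party candidates, which is part of why Martin and her colleagues proposed it.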
In this case, values lower than 0 indicate that the polls overestimated Clinton’s share of the vote. A value of 0 would indicate perfect accuracy. Here is a graph that I made based on Panagopoulos’ results:
The pollsters that were closest to the final result included McClatchy/Marist and IBD/TIPP. But others were also close — including the ABC News/Washington Post tracking poll. (Note: Although you are reading this on The Washington Post’s website, neither Panagopoulos nor I had any involvement in The Post’s tracking poll.)
Given the sample sizes and underlying margins of error in these polls, most were not that far from the actual result. In only two cases, reports Panagopoulos, was the bias in a poll statistically significant: the Los Angeles Times/USC poll, which had Trump with a national lead throughout the campaign, and the NBC News/SurveyMonkey poll, which overestimated Clinton’s share of the vote.
In other words, most individual polls were pretty accurate — which is why the averages were pretty accurate, too. So the graph above doesn’t crown any champion. Moreover, comparing the results in 2016 and 2012 suggests that no one pollster was consistently “the best” in both years.