Daniel W. Drezner is a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University and a regular contributor to PostEverything.

A sign reading “Vote” is displayed on the side of a booth as a voter casts a ballot at the San Francisco City Hall polling location in San Francisco on Nov. 8. (David Paul Morris/Bloomberg News)

One of the fun things about a surprising political event is watching the reaction of writers with predetermined axes to grind. Don’t get me wrong, all of us have our pet arguments and theories. The most devoted pundits, however, figure out how to graft their favorite arguments onto an explanation for What Just Happened.

Donald Trump’s solid victory in the electoral college certainly qualifies as a surprising political event. Indeed, the fact that it was surprising forms the core of political philosophy professor Jason Blakely’s latest article in the Atlantic, titled “Is Political Science This Year’s Election Casualty?”

Blakely clearly believes that the answer is yes: “Election forecasting is part of a wider trend in higher education to present the study of politics as a ‘science.’ But on November 8, 2016, the science of politics was almost uniformly and spectacularly wrong.”

We’ll revisit the claims in that sentence in a second, but for now, let’s ask what Blakely thinks political scientists will and should do in the wake of this alleged epic fail:

Inevitably, higher education and the mainstream in political science will follow the same line of self-criticism. And no one would be surprised if the American Political Science Association (a more-than 13,000-member association of professional, academic political scientists) invites Nate Cohn, Nate Silver, or Sam Wang to a panel discussion next summer to argue about how to make a model and tweak technical differences.

Yet when this happens, the larger philosophical questions — about whether the study of politics is indeed a science — will go unasked, and Americans will have missed a massive opportunity at self-correction in academia, the media, and society at large. …

Humanists across the social sciences, history, literature, and legal studies have argued for decades that politics is not a science but one of the humanities. In the view of humanists like myself, political knowledge is much closer to history than to physics or biology. The reason for this, as the philosopher Charles Taylor famously put it, is because human beings are “self-interpreting animals.” That is, humans are creative agents whose beliefs are held for contingent reasons that can always change, and therefore not susceptible to the causal predictions of the natural sciences. This means demographics, economy, voting history, and the other classifications that political scientists and statistics gurus use to scientifically model predictions are never destiny for human beings.

So, in other words, the author of a book about Charles Taylor thinks political scientists need to listen more to Charles Taylor.

Now as someone who has noted this election cycle that political science theories can be self-refuting, I’m not completely unsympathetic to Blakely’s plea for attention to the discipline’s humanities side. But his entire argument rests on the belief that political science got it wrong in 2016. Let’s revisit that assumption.

Most election prediction models in political science rest on a few core variables: economic indicators, the number of terms the incumbent party has been in power, etc. In August, Vox’s Dylan Matthews reported on what those models predicted for 2016:

Jacob Montgomery of Washington University in St. Louis and Texas A&M’s Florian Hollenbach combined six such models into an “ensemble model” that blends them together based on historical accuracy.

The forecast projected that Republicans would get 50.9 percent of the two-party vote and Democrats would receive 49.1 percent of the vote. As with any model, there’s a margin of error (the 95 percent confidence interval is from 43.61 percent of the vote for Democrats to 53.44 percent), but the point estimate was a GOP victory.

But this is a so-called “fundamentals model”: It isn’t based on specific knowledge of the two nominees or what kinds of campaigns they’re running or how they’re polling each day. Instead, it relies more heavily on data that’s been predictive across past elections: the state of the economy, President Obama’s approval rating, the fact that Democrats are seeking a third term in the White House, and more.
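The blending step in that ensemble is worth making concrete. Montgomery and Hollenbach’s actual method is more elaborate than this, and the component forecasts and error figures below are made-up placeholders rather than their real inputs, but a minimal sketch of an accuracy-weighted ensemble might look like the following:

```python
# Minimal sketch of an accuracy-weighted ensemble forecast.
# The component models and their historical errors are hypothetical
# placeholders, not the actual Montgomery-Hollenbach inputs.

# Each entry: (model name, predicted GOP share of the two-party vote,
#              historical root-mean-square error of that model).
forecasts = [
    ("model_a", 51.4, 2.1),
    ("model_b", 50.2, 1.8),
    ("model_c", 52.0, 3.0),
]

# Weight each model by its inverse historical error, so forecasts that
# have been more accurate in past elections count for more in the blend.
weights = {name: 1.0 / rmse for name, _, rmse in forecasts}
total_weight = sum(weights.values())

ensemble = sum(pred * weights[name] for name, pred, _ in forecasts)
ensemble /= total_weight

print(f"Ensemble GOP two-party share: {ensemble:.1f}%")
```

A toy version like this produces only the point estimate; the 95 percent confidence interval quoted above comes from the uncertainty in the component models, which this sketch ignores.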

Now the point of Matthews’s essay was that Trump was underperforming those models in the polling. Indeed, Emory political science professor Alan Abramowitz, whose model had the best track record, disavowed it back in June because he thought Trump was such an “out-of-bounds” candidate.

As it turns out, Abramowitz wasn’t entirely wrong. Trump did underperform the political science models, all of which focus on the popular vote. As of right now (votes are still being counted), Trump has received 49.6 percent of the two-party popular vote and Hillary Clinton 50.4 percent. So, in the end, the political science models were off by a whopping percentage point in their collective prediction, in that they overestimated Trump’s vote share. Still, as Vox’s Andrew Prokop noted last week, “these fundamental factors all pointed to a very close race that could conceivably go either way.”
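For the record, the arithmetic behind that sarcasm, using the ensemble point estimate quoted above against the still-preliminary two-party result:

```python
# Ensemble point estimate vs. the preliminary two-party result.
predicted_gop = 50.9  # Montgomery-Hollenbach ensemble, per Vox
actual_gop = 49.6     # Trump's two-party share as of this writing

print(f"Miss: {predicted_gop - actual_gop:.1f} points")  # Miss: 1.3 points
```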

Now if you read Blakely’s essay, you quickly realize his ire is focused on the use of polling as a predictor for the election. Even here, however, the average of the national polls showed a narrow Clinton victory in the popular vote. Which is what happened. As RealClearPolitics’ Sean Trende — whose analysis of this election cycle has been stellar — notes:

In fact, despite the hue and cry, the national polls were actually a touch better in 2016 than in 2012. Four years ago, the final RCP National Average gave President Obama a 0.7-point lead; he won by 3.9 points, for an error of 3.2 points. The final RCP Four-Way National Poll Average showed Hillary Clinton winning the popular vote by 3.3 points. She will probably win the popular vote by a point or so, which would equate to an error of around two points. …

What occurred wasn’t a failure of the polls. As with Brexit, it was a failure of punditry. Pundits saw Clinton with a 1.9 percent lead in Pennsylvania and assumed she would win. The correct interpretation was that, if Clinton’s actual vote share were just one point lower and Trump’s just one point higher, Trump would be tied or even a bit ahead.
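Trende’s arithmetic is easy to verify from the numbers in that passage; the only soft figure is the final 2016 popular-vote margin, which was still being counted and is taken here as roughly one point:

```python
# Poll error as Trende computes it: the gap between the final RCP
# average margin and the actual popular-vote margin, in points.
def poll_error(poll_margin: float, result_margin: float) -> float:
    return abs(poll_margin - result_margin)

print(f"2012: {poll_error(0.7, 3.9):.1f} points")  # 2012: 3.2 points
print(f"2016: {poll_error(3.3, 1.0):.1f} points")  # 2016: 2.3 points (approx.)

# Pennsylvania: Clinton led the polls by 1.9 points. Move one point of
# vote share from Clinton to Trump and the margin swings by two points.
pa_poll_margin = 1.9
print(f"PA after a one-point shift: {pa_poll_margin - 2.0:+.1f}")  # -0.1
```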

So, in other words, if pundits had paid closer attention to the polling and the political science, they would have done better.

The 2016 election was unusual in many ways, but in terms of predicting the outcome, it was particularly unusual in one respect: the popular vote and the electoral college pointed in different directions.

This gap absolutely merits further analysis. If there’s a failing in the political science models, it’s that this is the second time in the past five presidential election cycles that the popular vote and the electoral college have not matched up. The implicit assumption in the political science models is that those two should correlate almost perfectly. Clearly, they might not.

Given his ax, Blakely wants to be angry at political science. The problem is that he’s actually angry at the prediction sites such as the Upshot and the Princeton Election Consortium that radically underestimated Trump’s chances of victory.

Given the outcome, those are fair targets. But they’re not political science targets.

Blakely needs better aim with his ax.