Nadia Hassan writes:
The journal PS: Political Science & Politics recently had a symposium on presidential election forecasting. Michael Lewis-Beck brought up the fact that dynamic models can be updated pretty frequently. But, as you have noted, a lot of survey variation is noise. Indeed, Chris Wlezien told me most polling movement is noise. Is it really desirable to do a lot of updating for dynamic models? As you have noted, a steady stream of survey data is not essential to making pre-election forecasts that predict and explain the outcome of a forthcoming election.
I have several responses. First, I think the current proliferation of polls is ridiculous, and indeed it seems to just encourage poll watchers to chase noise and artifacts arising from survey nonresponse. For example, the solid line in the above graph (reprinted from this recent paper by David Rothschild, Sharad Goel, Doug Rivers and myself) shows the stability of vote preference during the last month or so of the 2012 presidential election.
Also, just to cite myself again (as is the academic way), this paper with Kari Lock discusses the idea that you can forecast elections by separately modeling national opinion and the relative positions of different states or districts.
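To give a feel for the decomposition idea, here is a minimal sketch, not the actual Gelman-Lock model: forecast each state's vote share as a national-level forecast plus a stable state-specific offset (the state's historical deviation from the national vote). All states, offsets, and forecast numbers below are made up for illustration.

```python
# Sketch of forecasting via national opinion + relative state positions.
# All numbers are hypothetical, for illustration only.

# Hypothetical state offsets: each state's historical Democratic vote
# share minus the national Democratic vote share (percentage points).
state_offsets = {
    "Ohio": -1.0,
    "California": +9.0,
    "Texas": -12.0,
}

def forecast_states(national_forecast, offsets):
    """Combine a national forecast with stable relative state positions."""
    return {state: national_forecast + off for state, off in offsets.items()}

national_forecast = 51.0  # hypothetical national two-party vote forecast (%)
preds = forecast_states(national_forecast, state_offsets)
for state, pred in sorted(preds.items()):
    print(f"{state}: {pred:.1f}%")
```

The appeal of this split is that the relative positions of states move slowly across election cycles, so most of the forecasting uncertainty lives in the single national number.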
The second point is that, if all this extra information is available, you might as well use it. But the natural result of the sort of real-time updating associated with Nate Silver and others is that nothing much happens from update to update. This creates an incentive to present hyperprecise forecasts, because readers want to see news every day.
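A toy calculation shows why little should change from update to update. Here is a hedged sketch, with made-up numbers, using a conjugate normal-normal Bayesian update: a reasonably precise forecast combined with one noisy poll moves only a fraction of the way toward the poll.

```python
# Toy normal-normal update: how much does one noisy poll move a forecast?
# All numbers are hypothetical, chosen only to illustrate the point.

def update(prior_mean, prior_sd, poll_mean, poll_sd):
    """Conjugate normal update of a forecast mean/sd given one poll."""
    prior_prec = 1 / prior_sd**2
    poll_prec = 1 / poll_sd**2
    post_prec = prior_prec + poll_prec
    post_mean = (prior_mean * prior_prec + poll_mean * poll_prec) / post_prec
    return post_mean, post_prec**-0.5

# Forecast: 52% +/- 1 point; a single poll comes in at 49% +/- 2 points.
mean, sd = update(52.0, 1.0, 49.0, 2.0)
print(f"posterior: {mean:.1f}% +/- {sd:.2f}")
```

Despite a three-point discrepancy between the poll and the forecast, the posterior mean moves by well under a point, because the poll's sampling error is large relative to the forecast's uncertainty. Daily updates on noisy polls mostly reshuffle that small residual.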
Remember that we called the 2010 elections over a year ahead of time (reconfirming a few months later with the unequivocal headline, “The Democrats are gonna get hammered”). Later polls will tell you something in such cases, but not so much.
That said, surprises do happen, and you want your models to be able to allow for them.