The stable political system in the United States, together with the high frequency of polls, favored a model based on few assumptions. By contrast, Chile’s unstable political system, together with the low frequency of polls, forced us to build a model with additional assumptions.
Yup. Of course, assumptions can go wrong, but that’s okay: When assumptions go wrong, that’s when we learn something. No pain, no gain.
But here’s the part I don’t quite agree with. Bunker and Bauchowitz write:
Some critics argue that forecasts made by poll aggregators should not be compared to predictions made by pollsters. We believe the contrary; poll aggregators and pollsters are essentially at odds. They compete against each other to get the numbers right.
I do agree with the above passage in the following sense. If pollsters release something that they call a forecast, then, sure, it’s only fair to treat it as such, and if you can beat the forecast, you deserve credit for it. But more generally, a poll is a snapshot, not a forecast. A poll can be useful in constructing a forecast (see, for example, our recent post, “Republicans on track to retain control of House in 2014,” which builds on the work of Bafumi, Erikson, and Wlezien forecasting midterm election results many months ahead of time from generic-ballot polls), but I think it’s important to separate the two things:
At the first stage, a poll is a snapshot. It can be a good snapshot or a bad snapshot (for example, because of problems with question wording, sampling or nonresponse).
At the second stage, various information, including snapshots, can be combined to make a forecast. As Bunker and Bauchowitz say, this is not always so easy, especially in an environment with an unpredictable outcome and sparse information. Successful forecasters deserve credit, but I see what they’re doing as making use of the polls, not competing with them.
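To make the two-stage distinction concrete, here is a minimal sketch of the second stage, assuming the simplest possible aggregation scheme: an inverse-variance-weighted average of poll snapshots. The poll numbers are hypothetical, and a real forecasting model would also adjust for house effects, time trends, and fundamentals; this is just an illustration of how snapshots can feed into a single combined estimate, not anyone’s actual method.

```python
import math

def aggregate_polls(polls):
    """Combine poll snapshots (share, sample size) into one estimate
    via inverse-variance weighting, one common aggregation scheme."""
    total_weight, weighted_sum = 0.0, 0.0
    for share, n in polls:
        var = share * (1 - share) / n   # sampling variance of a proportion
        w = 1.0 / var                   # more precise polls get more weight
        total_weight += w
        weighted_sum += w * share
    estimate = weighted_sum / total_weight
    se = math.sqrt(1.0 / total_weight)  # standard error of the pooled estimate
    return estimate, se

# Hypothetical snapshots of support for one candidate:
polls = [(0.52, 800), (0.49, 1200), (0.51, 600)]
estimate, se = aggregate_polls(polls)
print(f"pooled estimate: {estimate:.3f} +/- {1.96 * se:.3f}")
```

Note that even this toy version treats each poll purely as data to be used, not as a rival prediction: the aggregator’s output depends entirely on the snapshots it is fed.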