This post has been updated to include results from the Kansas Republican caucuses.
Carl Diggler is quite a character. He is a 30-year journalism veteran and air-hockey ace who somehow attended women-only Wellesley College and once got his tie stuck in a Wawa hot dog roller.
Diggler — in case you haven't heard of him, or it isn't obvious — is also fictional. He's the creation of two clever writers at Cafe, Felix Biederman and Virgil Texas, who conceived him as a way to "satirize all that is vacuous, elitist and ridiculous about the media class," as Texas explained on the PostEverything blog this week. The name of Diggler's column, The Dig, may or may not connote that of a certain other Washington Post blog that you may or may not be reading at this very moment. And Diggler may or may not enjoy taking occasional Twitter jabs at said blog.
— Carl Diggler (@carl_diggler) May 11, 2016
— Carl Diggler (@carl_diggler) March 10, 2016
But Diggler's favorite target, by far, is Nate Silver's FiveThirtyEight, the data-driven politics (and sports) analysis site that correctly predicted the winner in all 50 states during the 2012 presidential election and 49 out of 50 states in 2008. With Trumpian bombast, Diggler — who launched a site called SixThirtyEight this week — routinely mocks Silver as a "loser" and a "coward" and boasts that his own "unique combination of gut, instinct and racial science" has been more accurate than Silver's poll-based modeling during the 2016 primary season.
It's all good for a laugh, and Diggler's satirical columns are often all-too-true commentaries on political bloviating, much like the rantings of Stephen Colbert's Comedy Central character used to be.
The problem is that some real media outlets are actually buying into the idea that Diggler is more accurate than Silver. Check out these headlines:
The most accurate media pundit this election season is a parody account (Complex)
Day of the Diggler: the media's most accurate political pundit is a joke (International Business Times)
How a totally fake pundit beat the pros (Vocativ)
The secret sauce behind a parody pundit's predictive powers (FishbowlDC)
This is nonsense, which should be obvious. But apparently it's not. So let's treat the accuracy question as a serious one, for a moment.
Diggler has indeed made some great calls. He nailed Bernie Sanders's upset win in Indiana last week and got 20 of 22 Super Tuesday contests right. But the Diggler character brags about beating Silver at a game (pick the winner of every primary and caucus) that Silver isn't even playing.
Diggler has padded his stats, so to speak, by picking winners in places like Guam, American Samoa, the Virgin Islands and Puerto Rico — places where so few delegates were at stake that hardly anyone in the press paid attention to the races, but where the winners could be pretty easily predicted anyway. He also made picks in contests where conventional wisdom made the likely winner obvious (e.g., Sanders in Kansas and Maine, Ted Cruz in Idaho and Wyoming) but where there wasn't enough polling data for FiveThirtyEight to build its "polls-plus" and "polls-only" predictive models.
In any race where Silver didn't make a pick — again, he isn't playing the pick-every-contest game — Diggler's numbers treat the non-prediction as a wrong prediction. So Silver's accuracy rate through May 7, according to Diggler, was just 55 percent, while his own was 89 percent.
You don't need to be Nate Silver to figure out that's a skewed way of keeping score.
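The arithmetic behind that skew is easy to see. Here's a minimal sketch of the two scoring rules — the contest counts and picks below are made up for illustration, not the actual 2016 tallies:

```python
def accuracy_counting_skips_as_wrong(picks):
    """Diggler-style scoring: a contest with no pick counts as a miss."""
    correct = sum(1 for pick, actual in picks if pick == actual)
    return correct / len(picks)

def accuracy_on_picks_made(picks):
    """Score only the contests where a pick was actually made."""
    made = [(pick, actual) for pick, actual in picks if pick is not None]
    correct = sum(1 for pick, actual in made if pick == actual)
    return correct / len(made)

# Hypothetical pundit: picks 10 of 20 contests, gets 9 of those 10 right.
# Each entry is (pundit_pick_or_None, actual_winner).
results = [("A", "A")] * 9 + [("A", "B")] * 1 + [(None, "A")] * 10

print(accuracy_counting_skips_as_wrong(results))  # 0.45
print(accuracy_on_picks_made(results))            # 0.9
```

Under the first rule, a pundit who sits out half the contests looks barely better than a coin flip, even while getting 90 percent of actual picks right — which is roughly how Silver's record gets deflated to 55 percent on Diggler's scoreboard.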
Writing for PostEverything, Texas put the emphasis on raw numbers, rather than percentages, when he stated that Diggler has "predicted more correct primary results than Nate Silver." Yes, and in 2000, Steve Flesch made more birdies than Tiger Woods — because he played 41 more rounds of competitive golf. Was Woods, who won three major championships that season and was universally recognized as the best golfer in the world, a "coward" because he didn't enter as many tournaments as Flesch?
When FiveThirtyEight has modeled this year's primary races, the predictions have been highly accurate.
538's "Polls-only" model is up to 52/57 (91%) correct "calls" this year, and "polls-plus" up to 51/57 (89%), after Sanders's win in WV.
— Nate Silver (@NateSilver538) May 11, 2016
Diggler's percentage is as good, or almost as good, which is undeniably impressive — particularly for a fake pundit. But keep in mind that many of these predictions are gimmes. Even Silver's near-perfect record in the last two general elections isn't quite as amazing as it seems. Most states are reliably red or blue; he really only had to make tough calls in a handful of the most competitive swing states.
In political prognosticating, as in golf, the difference between being very good and being the best is slim. Steve Flesch was a darn good player in 2000. On most holes, he and Woods got the same scores. But head-to-head, Woods was better, and everyone knew it.
How has Diggler fared head-to-head against Silver? In most races, when both made picks, they made the same picks. And when their predictions have diverged, each has been right four times. So far, it's a draw. SixThirtyEight's side-by-side comparison originally omitted the results of the Kansas Republican caucuses, which Diggler got right and Silver got wrong. Diggler's database — and this post — initially reflected a 4-3 advantage for Silver.
Even if Silver did hold that slim lead over Diggler (which, again, it turns out he doesn't), it wouldn't be a huge difference. But we shouldn't expect the difference to be huge. Silver's modeling isn't highly regarded because it is way more accurate than everyone else's projections. Most people who pay even moderate attention to politics could correctly predict the winners of most races. Silver's cachet is that in the relatively small number of contests where the outcome is unclear, he is better than most at picking the winners — and has been for a while.
Carl Diggler is a fun story and an entertaining read. But let's not be silly and pretend that he's some kind of proof that data journalism and poll-based prognostication are B.S.; he isn't more accurate than Nate Silver.