Post-election, the discussion of "who got it right" has pretty much begun and ended with Nate Silver. I'm a fan of Silver's, but some other names deserve to appear on the honor roll.  So here's who I trust more now that the election is over.

Pollsters: Nate Silver -- and all the other modelers -- owe their triumph to a simple fact: The pollsters were good at their jobs. By the end of an election, most of these models are little more than polling aggregators, which means they're only as good as the polling behind them. And it turned out the polling was very good. The election results have been framed, sometimes jokingly and sometimes seriously, as a vindication for Silver, but really they were a vindication for the pollsters supplying the raw data. Special distinction here goes to Public Policy Polling, which was a bit of an outlier in the final weeks and proved to be exactly right; to Internet polls, which distinguished themselves as more accurate than their phone-based cousins; and to polling aggregators like Real Clear Politics, which help you follow the polls without getting too hung up on any one survey.
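To see why an aggregator is "only as good as the polling," here's a minimal sketch of the idea: average the recent polls, weighting each by its sample size. The function and the poll numbers below are invented for illustration; real aggregators use more sophisticated weighting (house effects, recency, and so on).

```python
# A minimal sketch of poll aggregation: average recent polls for one
# candidate, weighting each poll by its sample size.
# All poll numbers below are hypothetical.
def aggregate(polls):
    """polls: list of (candidate_share_pct, sample_size) tuples."""
    total_weight = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_weight

# Hypothetical final-week polls for one candidate:
polls = [(50.0, 1000), (51.5, 800), (49.5, 1200)]
print(round(aggregate(polls), 2))  # a sample-size-weighted average
```

The weighted average lands between the individual polls, which is the point: if the underlying polls are systematically off, so is the aggregate.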

Modelers and polling aggregators: Silver deserves enormous credit not just for building a model that called the election right, but for explaining, day after day, why his model was making the calls it was. The mixture of the model and the explanations offered a different way of following the election -- one informed by data rather than by the gut instincts of campaign reporters and the sensationalizing impulses of homepage editors. But Silver wasn't alone. Princeton's Sam Wang, another modeler, also called the election almost perfectly, and he got the North Dakota Senate race right, which Silver didn't. Emory's Drew Linzer also had a wildly successful year. If you were following this election through any of their models, you were getting good information -- and also seeing, importantly, a more stable race.

Political scientists: Back in April, I worked with three political scientists -- Yale's Seth Hill, GW's John Sides, and UCLA's Lynn Vavreck -- to build an election prediction tool based on data we had in June. We went back this week and ran the numbers: The model predicted the popular vote within a tenth of a percentage point. Some of that is luck, of course, but it's also good evidence that the model's basic theory was right: Election outcomes are broadly predictable by the summer, which should cast doubt on the volatility the media emphasizes through the fall.
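The flavor of such "fundamentals" forecasting can be shown with a toy example: fit a simple linear model of incumbent-party vote share on pre-summer economic growth, then forecast from it. Everything below -- the data points, the single predictor, the coefficients -- is invented for illustration and is not the actual model's specification.

```python
# Toy illustration of a fundamentals-based forecast: least-squares fit
# of incumbent-party vote share (y) on early-year GDP growth (x).
# All data are hypothetical.
growth = [0.5, 2.0, 3.5, -1.0, 1.5]      # hypothetical % growth, past years
vote = [49.0, 51.5, 54.0, 46.5, 50.5]    # hypothetical incumbent vote share

n = len(growth)
mean_x = sum(growth) / n
mean_y = sum(vote) / n
# Slope and intercept of y = a + b*x by ordinary least squares:
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(growth, vote))
     / sum((x - mean_x) ** 2 for x in growth))
a = mean_y - b * mean_x

# Forecast for a hypothetical year with 1.8% growth, known by June:
print(round(a + b * 1.8, 1))
```

The point isn't the particular numbers; it's that the inputs are all available months before November, which is what makes the summer forecast possible.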

If I were to sum up the poli-sci view of elections in a sentence, I'd go with this: We underestimate the stability of people's political preferences, we overestimate how much this gaffe or that piece of news will change things, and we forget that most people aren't paying much attention to the day-to-day political scrum. Or, if I were going to sum it up in two words, I'd probably go with: Calm down. That advice was sound throughout the campaign, and the basic view -- that presidential elections tend to be quite stable even as the media makes them seem extremely volatile -- held up extremely well.

The demographers: For years, there's been a committed core of thinkers making a demography-is-destiny argument about the electorate. Ruy Teixeira and John Judis are particularly famous for firing an early salvo with the 2002 publication of "The Emerging Democratic Majority," but Simon Rosenberg and NDN have spent years emphasizing the growth of the Hispanic vote and of millennials, and Teixeira and John Halpin did an excellent (and eerily prescient) job refining the thesis in their report on "The Path to 270." Ron Brownstein, of the National Journal, took this demographic approach and made it the primary lens through which he covered the 2012 campaign. If you were reading him over the last year, you would've had a much better sense of what to be looking for on election night.

The campaign nerds: It used to be that campaigns decided strategy by getting some guys in a room, talking through their ideas about the electorate, and settling on a message that made sense to them. That's not how it happens anymore. Instead, on the Obama campaign, strategy was decided by crunching enormous quantities of data on who might vote for them, where those voters live, and what they'd respond to. Then appeals were sent out in randomized experiments, and the ones that proved more successful were used. The best chronicler of this tactical and technological revolution has been Sasha Issenberg, who, in his book "The Victory Lab" and in his Slate columns, has done more than anyone else to track the development of these approaches. He's also reported on the devastating mismatch between the Obama campaign's modernized techniques and the Romney campaign's surprisingly old-school approach.
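The randomized-experiment approach described above can be sketched in a few lines: randomly assign voters to one of two appeals, measure which draws more responses, and keep the winner. The appeal names and response rates below are simulated, not drawn from any real campaign data.

```python
# A minimal sketch of a randomized messaging experiment: assign each
# (simulated) voter to a random appeal, record responses, and compare
# response rates. The "true" rates below are invented for illustration.
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def run_experiment(appeals, n_voters=10000):
    """appeals: dict mapping appeal name -> hidden true response rate.
    Returns the observed response rate for each appeal."""
    tallies = {name: [0, 0] for name in appeals}  # [responses, assigned]
    for _ in range(n_voters):
        name = random.choice(list(appeals))       # random assignment
        tallies[name][1] += 1
        if random.random() < appeals[name]:       # simulated response
            tallies[name][0] += 1
    return {name: hits / n for name, (hits, n) in tallies.items()}

# Two hypothetical appeals with different hidden effectiveness:
rates = run_experiment({"economic message": 0.06, "social message": 0.04})
winner = max(rates, key=rates.get)
print(winner, round(rates[winner], 3))
```

With enough voters in each arm, the observed rates separate cleanly, and the campaign simply scales up whichever appeal won -- no room in the room for anyone's pet theory.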

I don't want to spend too much time on who I trust less. But go back and look at Wonkblog's pundit accountability post, where we pulled together the predictions made by pundits across the spectrum.

What you'll see is a number of very smart Romney supporters who turned in very bad predictions. I remember the same thing happening to some very smart Democrats toward the end of the 2004 election. So one additional lesson I'd take from the last few months is to be very wary of pundits spinning complicated theories to explain why their favored candidate isn't really behind in the polls. A good writer can, of course, make anything sound convincing, and that's particularly true when they're spinning a tale you already want to believe. But those tales, when they're contradicted by the bulk of the data, are typically wrong, and should be treated with immense suspicion.