According to this PolitiFact analysis of the candidates running for president (along with a few other politicians), the most accurate among them speak truth or “mostly” truth a bit more than half the time. That would be Sen. Bernie Sanders, clocking in at 54 percent; former secretary of state Hillary Clinton is close behind at 51 percent (see figure above).

One could argue those two are pillars of truth compared to the single digits posted by Ben Carson and Donald Trump. Based on these metrics, one could also bemoan all of our politicians’ seemingly uncomfortable relationship with the truth, with the best of them as likely to serve up a fib as a fact.

Of course, one could also question the scorekeeper. PolitiFact and the other fact checkers provide an important service, but their track record is far from perfect. Their worst move, widely pilloried, was calling a truth (claims by Democrats that Republicans were proposing to end Medicare as we know it by turning it into a voucher program) the “lie of the year.” They once graded President Obama as only “mostly true” when he was correctly citing job growth numbers from the Bureau of Labor Statistics. I’ve had my own dust-ups with them and other “fact checkers” over the years. But everyone, myself included, makes mistakes.

There’s also a sampling problem in this piece: PolitiFact has subjected President Obama to 569 fact checks, compared with just 43 for Sanders, so by the rules of statistics (more on that below) Obama’s 48 percent truthiness rating is far more precisely measured than the others’.

All that said, there’s little question that we are living through a period in which facts are far too often on the run. That’s an existential problem when it comes to the science of climate change. It’s a political problem when the Republican front-runner (Trump) has a true/mostly true score of 7 percent.

And it’s a personal and institutional problem for the many of us in the empirical-evidence business, whose work is all about identifying and elevating facts intended to inform public policy. I’m not saying we — my colleagues and I at CBPP, whose work I know best — are perfect. I’m saying that, while we come to the analytical table with political leanings, we bend over backwards to make sure our work is as factually bulletproof as we can make it.

There are many layers to why the front-runner of one of the two major parties makes up stuff and then successfully — from his supporters’ perspective — beats up on anyone who points out that the emperor with the comb-over has no clothes. But I do think it would be smart and salutary if those of us in the fact business — and I include my right-wing brothers and sisters — took a few minutes away from number crunching and used that time to figure out how to make the world more open to factual discourse.

Perhaps it would help if we learned to be right more often. In that spirit, allow me to add another book to your holiday list: “How Not to Be Wrong” by mathematician Jordan Ellenberg. Yes, it’s an important book for our time, as Ellenberg provides the criteria by which we might someday find our way back to Factville. But it’s also a fun, stimulating read, accessible to anyone interested in the question of how we can know what’s true, what isn’t and what math’s got to do with it.

The subtitle is “The Power of Mathematical Thinking,” though that’s something of a misnomer, as the power described within is really that of statistical thinking. But be assured, this is no rehash of that stats class you barely made it through however many years ago. The book is filled with highly engaging historical and contemporary examples, like whether basketball players get a “hot hand” (they do, but you have to look really carefully to find it), why you can’t figure out how to protect warplanes by looking at the ones that survived the battle (you have to look at the ones that didn’t), whether apparent clusters of stars are really random, and why a lot of “significant” findings aren’t really significant at all.

I found this last bit really important. Ellenberg takes us through what researchers mean when they say a finding is “significant.” In research, we’re often trying to test some theory, and this oft-used phrase means that the probability of observing data like ours, if the theory weren’t true, is quite low. So you can reject the null hypothesis.

Here, to give you a sense of the graceful writing, is how Ellenberg puts it:

“It’s not enough that the data be consistent with your theory; they have to be inconsistent with the negation of your theory, the dreaded null hypothesis. I may assert that I possess telekinetic abilities so powerful that I can drag the sun out from beneath the horizon — if you want proof, just go outside about five in the morning and see the results of my work! But this kind of evidence is no evidence at all, because under the null hypothesis that I lack psychic gifts the sun would come up just the same.”
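
Ellenberg’s sun example can be made concrete with a toy significance test. Here is a minimal sketch in Python (the 60-heads-in-100-flips scenario is my own illustration, not from the book): compute the one-sided probability of a result at least that extreme under the null hypothesis of a fair coin.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of a result
    at least as extreme as k successes under the null hypothesis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Null hypothesis: the coin is fair (p = 0.5).
# Observing 60 heads in 100 flips IS surprising under the null...
p_value = binom_tail(100, 60)   # ~0.028, below the usual 0.05 cutoff

# ...but observing the sun rise is not: under the null hypothesis that
# I lack psychic powers, the sun comes up just the same, so "the sun
# rose" is no evidence against the null at all.
```

The point of the sketch is the asymmetry: data count as evidence for a theory only when they would have been unlikely without it.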

Ellenberg then provides the best tour I’ve had through all the reasons why “statistically significant” can itself be misleading. Some statistically significant findings actually lead us to the wrong conclusions, like the study that freaked out British women by showing that an oral contraceptive doubled the chance of blood clots … from 1/7000 to 2/7000. Or consider the religious scholars who find the names of the holiest rabbis embedded in the Torah, or the dead fish that reads minds.
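
The contraceptive scare is a relative-versus-absolute-risk confusion, and the arithmetic is worth spelling out (a sketch using the 1/7000 figures cited above):

```python
baseline_risk = 1 / 7000   # blood-clot risk without the pill
new_risk      = 2 / 7000   # risk reported for pill users

relative_risk = new_risk / baseline_risk      # 2.0 -- "doubles the chance!"
absolute_increase = new_risk - baseline_risk  # ~0.00014 -- one extra case
                                              # per 7,000 women

# The headline ("doubled!") and the denominator (per 7,000) are both true;
# only the second tells you how worried to be.
```

Both numbers describe the same study; the scary one just hides the denominator.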

The lesson Ellenberg gently draws us to is that in a world with billions of people, a game with thousands of shots at the basket, a sky with countless stars and a Torah with hundreds of thousands of letters, patterns will always show up. And humans, bless our probability-hobbled minds, will see truths in those patterns that are not truths at all. I studied this sort of thing years ago, and let me tell you, Ellenberg’s book was a refreshing reality check that I didn’t know I needed.
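
That “patterns will always show up” is just the arithmetic of multiple comparisons, and it is easy to simulate (a sketch with made-up numbers, not an example from the book): give 1,000 “psychic” volunteers 20 fair coin flips each to predict, and some will look impressively significant by chance alone.

```python
import random

random.seed(0)  # reproducible run

def correct_calls(n_flips):
    """Number of correct 'predictions' for one volunteer: pure chance."""
    return sum(random.random() < 0.5 for _ in range(n_flips))

# Under the null, P(>= 15 correct out of 20) is about 0.021 --
# "significant" at the usual 0.05 threshold.
significant = sum(correct_calls(20) >= 15 for _ in range(1000))

# Expect roughly 20 apparent psychics out of 1,000 -- none of them real.
```

Test enough dead fish and one of them will appear to read minds; the fix is to account for how many comparisons you made.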

There are lots of math books that purport to be for the masses and then, by page 5, ask you to picture an n-dimensional matrix in r-space. That doesn’t happen here. Ellenberg is one of those rare people who somehow managed to acquire top-shelf math skills while maintaining the ability to explain things to the rest of us. Even more impressive in that regard, he’s the product of multiplication by two statisticians (i.e., his parents; “Jordan, the Bayesian likelihood that you’ll be allowed to use the car Friday night conditional on your last report card is an asymptotically decreasing function of x”).

For all that, even Ellenberg gets something wrong. In a smart treatise on the pitfalls of linear versus non-linear analysis — projections from straight lines can lead you to some ridiculous conclusions — he gives way too much love to the Laffer curve (he writes, “There’s nothing wrong with the Laffer curve”), a diagram purporting to show that under certain conditions the government can raise more revenue by cutting taxes. When you put it that broadly, there are surely cases where that counterintuitive relationship will hold, but here you’ve got to look at the fiscal evidence, and there’s just nothing there — nada, zip. No car for you this Friday, Jordan.

But that’s the only slip-up I found. The chart above, though certainly flawed, suggests that our policymakers are miles away from Factville. And while that’s tough for those of us in the fact business, it’s a lot tougher from the perspective of an $18 trillion advanced economy facing existential challenges.

I’d love to lock up everyone in political power and not let them out until they’ve read “How Not to Be Wrong.” Barring that, read it yourself or give it to someone who needs it. I’ll bet a few people you know come to mind.