
Did you hear this one? Hurricanes with girls’ names are more destructive than those with boys’ names. (It’s not true.) Or this one: A policy in China generated so much air pollution that it shortened the lives of half the people in the country by an average of five years. (Nope, the claims are based on extrapolating from an implausible model.) Okay, what about this one: Subliminal smiley faces lead to big changes in political attitudes. (Yup, no real evidence for that one either.)

All these are examples of a problem facing policymakers: Week after week, they are bombarded with numbers purporting to come from scientific studies. These numbers might appear in prestigious journals such as the Proceedings of the National Academy of Sciences, or be featured uncritically in the New York Times and other media outlets, and yet still be wrong, sometimes even ridiculous, as in the hurricanes example.

Political science, and social science more generally, supposes that policy should, where possible, be driven by empirical research. But when the empirical research is flawed, it can be used as an excuse for policies that are at best lucky guesses and likely to be mistaken, and at worst vehicles for corruption.

Hence it can be helpful when policy-minded scholars remind us how flawed research can misdirect policy. Unlike some widely reported scandals, none of these studies were based on fake data; they just represented unwarranted generalizations, with researchers and publicists jumping to conclusions from data that were not strong enough to answer the underlying questions.

Which brings us to a useful post by medical researcher Stephen Soumerai and sociologist Ross Koppel (sent to me by Mark Tuttle) explaining how shoddy research can misdirect health-care policy:

Long before Congress created the Health Information Technology for Economic and Clinical Health (HITECH) Act, giving $32 billion to health care providers to transfer to Electronic Health Records (EHR) vendors, plans for that windfall were created by Health Information Technology (HIT) vendors, HIT enthusiasts, and friendly politicians (like Newt Gingrich).

The plans included an enormous lobbying campaign. Congress responded obediently. Most commentators focus on that $32 billion for the HITECH Act’s incentives and subsidies. But that was only seed money. The real dollars are the trillions providers spent and will spend on the technology and the implementation process.

Trillions of dollars, in other words, ride on this mandate to “promote the adoption and meaningful use of health information technology.”

Soumerai and Koppel continue:

Much of the economic justification for the spending on HIT was based on a now-debunked RAND study that promised up to $100 billion in annual savings. Recently, however, in a remarkable act of ethics and honesty, RAND disclosed its previous study’s problems, dubious data, and weak research design, and that the research was subsidized by two of the larger HIT vendors (Cerner and GE).

But . . .

The Congressional Budget Office (CBO) and the Office of the National Coordinator for Health Information Technology (ONC), both of which touted the first RAND study, have not issued reassessments of their happy predictions but continue to promote HIT’s cost savings and improved patient safety. While HIT should be and absolutely is far better than paper records, more than 30,000 studies had already failed to support such bold assertions of powerful improvements in health and efficiency. Moreover, the research designs of all but a tiny proportion of those studies were too weak to yield trustworthy conclusions. And the best of them showed few if any benefits. This comes to the heart of our concern here: the use of weak research in support of less-than-effective health policies and medical treatments.

And there’s more:

Implementation of other federal policies with questionable economic incentives and penalties has also not lived up to expectations. These policies include paying physicians extra income for things they were already doing (e.g., taking blood pressure), setting up as-yet-to-be-proven-effective Accountable Care Organizations to incentivize cost savings, and charging patients with high cholesterol thousands of dollars more for their health insurance premiums through Affordable Care Act-sanctioned wellness programs that do not improve chronic illness.

Soumerai and Koppel summarize:

The common denominator here? Absent or untrustworthy evidence of treatment and policy benefits, ignorance of failures, and possibilities of patient harm. Also, the crude application of economic incentives to change doctor and patient behaviors can backfire (e.g., changing diagnostic codes to maximize revenue or avoiding care for sick and expensive patients).

I see similar problems in the education world. Even without considering the possibilities of corruption, we all want to hear good news, whether it be the effectiveness of the latest cost-saving teaching innovation or the claim, based on a study of 130 kids in Jamaica, that early childhood intervention can raise children’s future earnings by 42 percent.

Soumerai and Koppel point to a recent article for the CDC by Soumerai, Douglas Starr and Sumit Majumdar, “How Do You Know Which Health Care Effectiveness Research You Can Trust? A Guide to Study Design for the Perplexed,” which demonstrates principles of study design with five examples. Here’s one:

The claim by the Institute for Healthcare Improvement (IHI) that its national hospital safety program, “the 100,000 Lives Campaign,” saved over 120,000 lives. The claim rested on trends in mortality that were already under way before the campaign started; that is, on a weak design that couldn’t control for prior events such as the increasing use of life-saving drugs.

We debunked the exaggerated finding by tracking 12 years of hospital mortality data from before the campaign started and found no change in the already declining mortality trend. Yet the widespread policy and media reports led several European countries to adopt this “successful” and expensive model of patient safety.

That is the scary part: Policy based on bad data gets implemented even after it has been debunked.
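To see how a pre-existing trend can masquerade as a program effect, here is a small simulation in Python. It is purely illustrative, with made-up numbers rather than the campaign’s actual data: a naive before/after comparison credits a hypothetical campaign with a large drop in mortality, while comparing the post-campaign years against the trend that was already in place shows essentially no effect.

    import numpy as np

    rng = np.random.default_rng(0)

    # Twelve years of made-up hospital mortality data (deaths per 1,000
    # admissions), already declining by about 2 points per year before
    # any campaign exists.
    years = np.arange(2000, 2012)
    campaign_start = 2006                      # hypothetical campaign launch
    mortality = 80 - 2.0 * (years - years[0]) + rng.normal(0, 0.5, size=len(years))

    pre = years < campaign_start
    post = ~pre

    # Naive before/after comparison: the ongoing decline gets credited
    # to the campaign.
    naive = mortality[post].mean() - mortality[pre].mean()
    print(f"Naive before/after change: {naive:.1f}")

    # Trend-aware check: fit the pre-campaign trend, project it forward,
    # and ask whether the post-campaign years deviate from that projection.
    slope, intercept = np.polyfit(years[pre], mortality[pre], 1)
    projected = intercept + slope * years[post]
    deviation = (mortality[post] - projected).mean()
    print(f"Average deviation from the pre-existing trend: {deviation:.1f}")

That, in rough outline, is the kind of check the authors describe: look at the trend that existed before the intervention and ask whether anything actually changed.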

Of course, I’m getting this from just one source myself, and policymakers should base their decisions on all the available evidence, not on any single study. The point, though, is that it is all too common for a published research claim to be taken as true. The consequences can be huge.