Everyone has heard about the placebo effect -- the bizarre fact that a dummy medicine can make a sick person feel better than nothing at all. But a study published in the Annals of Internal Medicine on Monday found that not all shams are the same: How a fake medicine is administered can determine how well it works.
Researchers reviewed more than 100 clinical trials for knee osteoarthritis, focusing not on the drugs, treatments and injections that were the original point of the studies, but on the patients who got a fake treatment for their pain. To their surprise, the research team found that a sham injection with saline solution was not only better than a fake pill; it was also 1.6 times better at relieving pain than an actual drug -- Tylenol. And there was a hierarchy of fakes, too: A fake topical cream for pain relief worked better than a sugar pill.
The effort to understand the magnitude of the placebo effect, and the factors that make it strong, has ballooned from a quirky side branch of medicine into an area of increasingly rigorous study. Research has found that a costlier fake treatment can work better than a cheap one, that patients taking a real drug labeled as a placebo do as well as people taking a placebo labeled as a drug, and that even knowing the medicine is fake doesn't erase the effect.
The mounting data have sparked a debate in medicine over whether -- and how -- the placebo effect can be harnessed in treating patients. After all, ethical issues arise if a physician lies to a patient and claims to be administering a drug that is nothing more than sugar.
But the new study brings into focus a different and less well-known part of the placebo debate: the complexities it can introduce when trying to figure out how one treatment compares with another. Comparative effectiveness research is intended to guide physicians to the best treatments with the least side effects. That strain of research has been expected to make medical decision-making more rational and help doctors confront an ever-growing array of me-too treatments that may provide them with multiple options to treat a single disease.
But Raveendhara Bannuru, the director of the Center for Treatment Comparison and Integrative Analysis at Tufts Medical Center and the leader of the study, says that doctors comparing treatments often won't be able to tell clearly how two treatments stack up against each other, given how the trials are currently done. Take this example: A physician is considering a pill or an injection for a patient's pain. He sees one study showing that a pill was far more effective than a placebo and another showing that an injection was only slightly more effective than a sham injection. The pill might appear to have the larger effect, but Bannuru's research shows that is not an apples-to-apples comparison.
That's because injections or creams appear to have stronger placebo effects than pills.
"We need to definitely take that into account while designing future randomized trials," Bannuru says.
An accompanying editorial in the journal calls this "an efficacy paradox": A treatment that seems only marginally better than its placebo might actually be much better than a rival treatment, if the placebo effect itself varies with how the treatment is delivered. This study looked at one condition, but the placebo effect has been documented in a range of diseases and conditions, from migraines to Parkinson's.
It's yet another challenge for medicine to resolve to help doctors make the most informed choices.