Placebo controls are a gold standard against which new treatments are often measured. If a new treatment consistently proves to be better than a placebo and safe, it can be marketed, sold and prescribed. Otherwise, it can’t – or at least shouldn’t. The problem is that, as our latest study reveals, researchers often don’t report what placebos contain. Different placebos have different effects, and the choice of what’s in a placebo can lead to mistaken inferences about a new treatment’s benefits or harms.
Here are a few examples. Olive oil was previously used as a placebo control for cholesterol-lowering drugs before it was discovered that olive oil has cholesterol-lowering properties of its own. This may explain why the apparent effect of these drugs in some trials was lower than expected.
In trials of oseltamivir (Tamiflu), the placebo contained dehydrocholic acid, presumably to mimic the bitter taste of the drug. But dehydrocholic acid can cause gastrointestinal upset, as can oseltamivir. The trials found an increased risk of nausea and vomiting in the oseltamivir group compared with the placebo group. But this was probably an underestimate of the true incidence of harm, because the placebo contained an ingredient that can cause the same side effects as the actual drug.
Why don’t researchers report what’s in placebos? One problem is that people believe the placebo or sham intervention is “inert”. If they were inert, there would be no point in reporting what’s in them. In fact, placebos are not inert – no substance is.
Pink tablets have a greater stimulating effect (get the adrenaline pumping) than blue ones (except for Italian men, in whom blue tablets produce a stimulating effect, possibly because their national football team wears blue). Branded, expensive tablets have greater painkilling effects than cheap generic ones, possibly because they influence patients’ expectations. And two placebos are better than one.
Of course, not all placebos are pills. They also include minimally invasive surgery, acupuncture needles that don’t pierce acupuncture points, manipulations, and others. Some evidence suggests that injections are more effective than pills and sham surgery is the most powerful placebo of all. The mechanisms by which these different placebo/sham interventions work go beyond the expectation of clinical improvement.
Sometimes placebos are recognisably different from the active treatment they are meant to control for. A 2016 review found that 64% of placebo control interventions did not match the physical properties of the drug being tested. If patients can identify the placebo, then the trial is not “blinded”.
Unblinded patients who know they are receiving a mere placebo may have lower expectations about recovery. These lower expectations can then affect the trial outcome, especially when symptoms are subjective and susceptible to suggestion. Depression trials, where outcomes are measured by self-reported symptoms, are particularly prone to this problem.
Patients who believe they’re taking the real drug, whether they are or not, may develop higher expectations about feeling better, activate the brain’s reward mechanism so that it produces more dopamine, then actually feel better. Meanwhile, the opposite happens to patients who believe they are taking a placebo. Patients who know they are taking a placebo may even experience a “nocebo” effect – the effect of a negative expectation. Expectations have led to exaggerated drug effects in antidepressant trials.
These examples of placebo components leading to mistaken inferences about the apparent benefits or harms of active treatments may be exceptions, but we can’t be sure until we know what’s in placebos.
A 2010 systematic review found that between 8% and 27% of trials described the placebo or sham intervention. Since then, guidelines for reporting on placebos in clinical trials have been published and are recommended by top journals, such as the BMJ.
The “template for intervention description and replication” (TIDieR) checklist includes 12 items that researchers should report about the components of the new treatment, including what’s in them, who delivered them and how long the treatment lasted.
Unfortunately, these guidelines have barely improved how well placebo components are reported. Our latest study identified 94 placebo or sham-controlled trials published in top journals in 2018. None was reported completely according to TIDieR guidance, with most trials reporting only half of what we need to know about placebos. In lesser journals, the reporting quality of placebo controls was worse, but not by much.
There are many reasons placebo or sham controls are not well reported. As mentioned above, it is mistakenly assumed that they are inert, and reporting what’s in something inert seems redundant. Using the same word, “placebo” (or “sham”), to describe these interventions also makes it appear as though they are all the same, and again not worth describing. And journals have strict word limits, which can squeeze out full descriptions of placebo or sham controls. However, online appendices are making the word count problem moot.
Placebo-controlled trials are among the most trusted methods for determining whether new treatments are effective and safe. To be worthy of this trust, we need to know what the placebo or sham contains.