Sometimes, the most important steps in medical research are the ones we ignore.
In April, a major study was published online by the New England Journal of Medicine. Aliskiren, a relatively new medication, was added to enalapril, one of the most common blood pressure drugs in Australia. The objective was to improve outcomes for heart failure patients. There were 7,000 participants. This was a big, relevant trial. So why didn’t you hear about it?
Well, the drug didn’t work. But that doesn’t mean the study wasn’t a success – it was.
The patients on combination treatment had more toxicity: dangerously low blood pressure, kidney dysfunction and high potassium levels. They also died just as often, which understandably came as a surprise to those who designed the trial.
It would have come as even more of a shock to Novartis, the company that spent a lot of money developing aliskiren. But surprise is good; that is the point of a well-designed clinical trial. Regardless of the outcome, studies like this are the core reason you can generally trust your doctor when they recommend a treatment to you.
Trials with a negative result deserve more attention. Their existence should guide rational health care as much as those with a positive result.
The aliskiren-enalapril study was a randomised controlled trial. Randomised because the participants were randomly assigned their treatment group. Controlled because they received either the experimental drug or a placebo. Nobody who met them, not even their treating clinicians, could be sure which group participants were in, or what they were taking.
Randomising patients means that, on average, the only systematic difference between the groups is the experimental drug. So, from the results, you can confidently say whether or not the treatment was effective.
Until relatively recently, drug companies could bury these studies if they produced an unfavourable result. Only positive trials were submitted for publication, leading to significant bias about the potential benefit of new medications.
That has changed over the past couple of decades. Companies now have to register major studies before they start. It is the design of the trial that determines whether it should be published, not the results.
Unfortunately, the wider media often ignore so-called negative trials, where results won’t alter anyone’s practice. In some ways this is understandable. Who wants to talk about something that supports the status quo? But these results are just as important as the “game-changers”.
In the absence of a well-designed trial, the so-called game-changers and breakthroughs may be anything but. Many are reported in their early phases of development. These drugs might be effective, or they might not. The only way to find out is to test them properly in a randomised controlled trial.
Almost all treatments should be subject to this level of scrutiny, be they conventional or alternative. When interventions have only gone through a weak level of analysis – or, worse still, are peddled based on opinion alone – this should be highlighted by those who are arguing their case.
The strength of our entire health care system rests on guaranteeing a level of care that is supported by a body of evidence. Medicare and hospitals can be justifiably sustained only if we have good reasons for funding the treatments they provide.
Take a look at some of those treatments our taxes pay for: Medicare rebates are available for acupuncture, despite multiple large reviews suggesting no benefit greater than placebo. The cost of investigating and managing non-specific lower back pain grows every year, despite a lack of evidence for invasive interventions.
High-dose intravenous vitamin C is often touted as a single-agent cancer therapy, yet the only randomised controlled trials ever performed for this line of treatment failed to show a difference above placebo.
In 2012, an Australian study identified 156 ineffective or unsafe services that were still being federally funded. This indicates a simple truth: we are ignoring negative trials at our peril.
More focus should be placed on the existence of such studies. When well-designed, they answer vital questions about how to direct health funding.
While the outcomes may be uncomfortable for the developers of treatments, this is irrelevant to the millions of patients worldwide who participate in negative trials. They put their bodies on the line for the sake of these questions. We should at least pay attention to the answers.