We work at the Australasian Cochrane Centre and we dread being asked what we do for a living. This isn’t because we don’t like what we do; in fact, we love it. It’s because when we explain that our job is about helping clinicians and policy makers to use research about what works to inform their treatment or policy decisions, we often receive blank stares or a slow, confused “You mean they don’t do that already?”
Ideally they do, but for lots of reasons this is not always the case. One of the challenges is knowing how to find out exactly what works, and one of the most powerful ways of doing this is by looking at systematic reviews.
Put simply, a systematic review is a reliable summary of all the research that’s available on a specific question, such as: do antibiotics reduce the duration of sore throat symptoms?
The systematic nature of these reviews is key. A good systematic review will describe the lengths the authors went to in order to identify relevant studies; the criteria for including studies; how the quality of the studies was assessed; and how the results of the studies were combined.
The principle is that the methods followed should be transparent and replicable, minimising the extent to which authors can influence or bias the review, for example, by leaving out studies whose results they disagree with. In theory at least, this means that another team, following the same methods, would end up with very similar results.
Finding the entire body of research on a particular question is important because in any given area the results of some studies will agree and others won’t. You might search the internet and find a study that shows promising results for an eczema cream, but without reviewing all the studies, you have no way of knowing with any certainty if the results of this one study are consistent with the results of other studies of the cream.
Knowing when, how and why individual studies agree or disagree helps interpret and apply the evidence in practice.
Systematic reviews perfectly encapsulate the adage that the whole is greater than the sum of its parts. Small studies with inconclusive results are common, but when they are looked at together and combined using a neat statistical technique called meta-analysis, a conclusive overall answer can often be found.
The logo of The Cochrane Collaboration depicts the results of one such meta-analysis.
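For readers curious about what “combining” actually involves, here is a minimal sketch of the simplest approach, a fixed-effect, inverse-variance meta-analysis, written in Python. The three trial results are invented purely for illustration (real reviews are built with dedicated software such as RevMan and often use more sophisticated models), but the arithmetic is the standard one: weight each study by the inverse of its variance, then pool.

import math

# Three invented small trials of the same treatment. Each entry is
# (effect estimate, standard error); here the effect is a difference in
# mean symptom duration (days), negative meaning the treatment helps.
# On its own, every one of these trials is inconclusive: each 95%
# confidence interval crosses zero.
trials = [(-1.0, 0.60), (-0.9, 0.70), (-1.1, 0.65)]

# Fixed-effect (inverse-variance) meta-analysis: weight each trial by
# 1 / (standard error squared), so more precise studies count for more.
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The pooled 95% confidence interval excludes zero, so the combined
# evidence points to a benefit even though no single trial was conclusive.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} days (95% CI {low:.2f} to {high:.2f})")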
One criticism of systematic reviews is their over-reliance on randomised trials, the gold-standard approach to assessing the effects of interventions. This is changing rapidly, and there’s growing awareness of the importance of other study designs in providing a more complete picture of the evidence.
Systematic reviews undertaken by The Cochrane Collaboration, the world’s largest producer of such reviews, now include a range of study designs appropriate to the question being asked:
qualitative research to explain, for example, why a particular treatment might work in some populations but not others;
economic analyses to address issues of cost and cost-benefit; and
non-randomised studies where it would be unethical or impractical to randomise.
Methodological advances have made systematic reviews much more sophisticated and useful summaries of evidence, but there are still plenty of challenges. Finding all the research on a subject is a case in point. We have a good handle on published research, helped in recent times by the advent of prospective trial registers, but we are only now beginning to appreciate the potential implications of the mountains of unpublished data.
Even when we know studies have been done it can be hard to get access to unpublished results. The authors of the recent Cochrane review on Tamiflu, for instance, had to comb through reams of unpublished data submitted to regulatory authorities to unearth possible under-reporting of side effects.
We know too that studies with unexciting results (showing that a treatment doesn’t work) are less likely to be published than studies with exciting results. This can leave the published literature painting a rosier picture than the evidence as a whole would support, so it’s important that authors assess and report the likelihood of publication bias in their review.
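One standard way of doing that assessment is a funnel plot: each study’s effect estimate is plotted against its standard error, and a lopsided scatter suggests that small studies with unexciting results may be sitting unpublished in a drawer. The sketch below, again using invented numbers chosen only to show the shape, illustrates the basic construction.

import matplotlib.pyplot as plt

# Invented effect estimates and standard errors for ten studies,
# purely to illustrate what a funnel plot looks like.
effects = [-1.2, -0.9, -1.0, -0.7, -1.4, -0.5, -1.1, -0.8, -1.3, -0.6]
std_errors = [0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50, 0.55, 0.60, 0.65]

plt.scatter(effects, std_errors)
plt.gca().invert_yaxis()            # most precise (largest) studies at the top
plt.axvline(-0.95, linestyle="--")  # rough pooled effect, for reference
plt.xlabel("Effect estimate")
plt.ylabel("Standard error")
plt.title("Funnel plot: a roughly symmetric funnel suggests little publication bias")
plt.show()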
Other challenges include making systematic reviews more readable and relevant to the individual consumer, practitioner or policy maker. While the technical nature of these reviews is a strength, we know they can be challenging for readers.
The Cochrane Collaboration summarises all its reviews in plain language and is continually considering how to present information from reviews in friendly, easily interpretable formats.
Systematic reviews are essential for making sense of research and helping consumers, practitioners and policy makers identify what works or doesn’t work. They also have a vital role in identifying uncertainties and priorities for future research. If only we could explain this in a sentence or two over a glass of wine.