Without evidence-based healthcare, medicine is not much better than folklore.
In the bad old days, clinical decisions were based largely on the experience and wisdom of doctors and other healthcare professionals, but treatments given in this manner sometimes did more harm than good. Evidence-based healthcare (EBHC), on the other hand, uses population data to work out the best treatments for different illnesses.
But although population data point towards some regularity in large groups of people, that may be all it is – a signal – rather than evidence of clinical effectiveness in individual patients.
EBHC explicitly trusts knowledge produced by some research methods more than it does others, and it is this knowledge that doctors should use when making decisions about which treatment is best for their patient.
The favoured methods are wide-scale population studies which can track the effects of a treatment over time in large numbers of people. Ideally, these methods should compare the treatment against an alternative, or a placebo (methods such as randomised controlled trials do this). These methods help reduce the inherent biases of human judgement.
Paradoxically, however, it could be the very emphasis which is placed on these methods which has created barriers to EBHC’s own core concern – to find the best evidence for patient care.
How it lost its way
Although population studies have been helping doctors make treatment decisions for decades, there are still many areas where disease is increasing, for example back pain and arthritis. This might be because we need to do better studies. However, how truthful a study's conclusions are is not necessarily related to how well it is conducted.
There is, of course, some irony in using scientific methods to judge the value of scientific methods. We may need other ways to think about the issues here. So, a little philosophy: the theoretical basis of population studies boils down to a specific idea of causation (causation is, after all, what this whole thing is about – working out what treatment causes what effect). This idea is famously linked to the 18th-century Scottish philosopher David Hume.
Hume said that causation was nothing more than beliefs brought about by continually observing similar responses between two events (cause and effect). This is essentially what all population studies do. He also said that if the cause were not there, then neither would the effect be. This is essentially what randomised controlled trials, specifically, try to establish.
Hume, however, was troubled by the thought that causation must be more than this because his theory did not include anything about the causal matter itself – the real-world, complex, messy stuff which exists in individual situations rather than at population-level and which glues the cause and effect together. Only the observed outcomes are considered. Knowing outcomes might be all we need, but with the methods valued by EBHC, the outcomes relate to groups, not individuals.
So could EBHC be leading us to an incomplete picture of what works for an individual? Some people think that this might be the case, and even suggest that the movement is in crisis. Perhaps now is the perfect time to re-evaluate what best evidence for clinical effectiveness is, while considering the specific needs and context of each individual patient.
Can EBHC find its way again?
If Hume’s worries are right, and there is more to the story of causation than just regularly observed events, then we are indeed able to move forwards. Healthcare is undoubtedly a complex world. In complexity, the behaviour of things becomes difficult to predict and is highly context-sensitive. The growing gap between scientific research and real-world complexity has been highlighted before.
Population data is not infallible, and it could very well be that something which appears to be working for the whole group is not actually effective for individuals. Though this kind of data may offer a probability of outcome, it may not tell of other causal factors which will influence the outcome in any particular case.
Assuming then that the effectiveness of an intervention is context-sensitive, it will produce a different response in each different situation. For example, exercise might be recommended for low back pain, but its effectiveness will be influenced by the patient’s fitness level, fear of movement, anxiety, sleep pattern, understanding of the exercise and so forth. So the individual context will influence the effectiveness of the treatment. Population studies average these responses out. This gives us good data on the population, but it says little about you.
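To see how averaging can hide individual variation, here is a minimal illustrative sketch (not from any real trial – the subgroups, effect sizes and proportions are invented assumptions). It simulates a population in which most people respond well to a treatment while a minority are slightly harmed. The group average looks clearly positive even though a sizeable fraction of individuals are worse off.

```python
import random

random.seed(0)

def simulate_population(n=10_000, responder_share=0.7):
    """Simulate individual treatment effects in a mixed population.

    Hypothetical assumptions: 70% of people are 'responders' who improve
    by 2 points on average; the rest worsen slightly (-0.5 on average).
    """
    effects = []
    for _ in range(n):
        if random.random() < responder_share:
            effects.append(random.gauss(2.0, 1.0))   # responders improve
        else:
            effects.append(random.gauss(-0.5, 1.0))  # non-responders worsen
    return effects

effects = simulate_population()
average_effect = sum(effects) / len(effects)
share_harmed = sum(e < 0 for e in effects) / len(effects)

# The population-level summary is positive, yet roughly a fifth of
# individuals experience a negative effect.
print(f"average effect: {average_effect:.2f}")
print(f"proportion with a negative effect: {share_harmed:.0%}")
```

A trial reporting only the average would conclude "the treatment works", while saying nothing about which individuals it fails – the gap between group-level and individual-level evidence described above.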
What EBHC now needs is a revision of its systematic and scientific methods. Rather than controlling for the complexity of the real world, these methods should serve to embrace it. My latest paper offers strategies for how this might be done, such as identifying data patterns from a range of research methods which signal the variation and complexity of causation, and expanding research partnerships across disciplines to capture and represent the context and complexity of health.
Although elements of these strategies might already be in place, they still operate under a hierarchy that prioritises certain methods over others. This restricts our understanding of causation.
Maybe we have been looking down the EBHC telescope the wrong way, trying to understand the individual by studying the population. If we turn it around, we might progress from knowing what works, to what works for you.