MEDICAL HISTORIES – The final instalment in our short series discusses the evolution of evidence-based medicine.
Like bleeding, doctors' intuition was a central part of medical practice until it was categorically proven not to offer patients the best outcomes. This led to the birth of evidence-based medicine. But it was a painful and protracted birth that tells us much about the nature of medicine and the identity of medical practitioners.
Medical practitioners used the therapy of venesection (bleeding) for centuries. It was a crucial element of the medical armamentarium from the ancient Greeks to the nineteenth century. Surveying images of venesection, like Gillray's "Breathing a Vein", we see how connected it was to the identity of the doctor.
To be a doctor was to bleed patients. As a therapy it was beyond question, so tightly was it bound to medical identity and practice.
Then, from the early 1820s, the French physician Pierre Louis started questioning the dogma of venesection. He used a new method designed to show whether the therapy worked or not – using simple statistics, he compared the clinical outcomes for patients treated with bleeding and those treated without it.
In every single condition he explored, whether it was pneumonia or phthisis (tuberculosis of the lungs), Louis demonstrated that bleeding, that most medical of therapies, was actually harmful to patients. Louis, then, has a claim to the title "father of the clinical trial". His experiments are the ground zero of what would, in the hands of epidemiologists like Richard Doll (1912-2005), become the randomised controlled trial (RCT).
It is, perhaps, surprising that Louis' numerical method took so long to catch on. But the clinical judgement of individual consultants was as deeply ingrained in medicine's culture as venesection had been a few generations earlier. In certain elite institutions, like the London teaching hospitals, intuition was what one clinician called "the incommunicable knowledge" of the art of medicine. It was valued more highly than the application of science or technology to practice.
Writing towards the end of his life Doll reminisced about this older clinical style:
When I qualified in medicine in 1937, new treatments were almost always introduced on the grounds that in the hands of professor A or in the hands of a consultant at one of the leading teaching hospitals, the results in a small series of patients (seldom more than 50) had been superior to those recorded by professor B (or some other consultant) or by the same investigator previously.
Doll wanted to change this by bringing statistical methods into the clinic. Using rigorous statistical methods, he became one of the first scientists to identify the link between smoking and lung cancer in 1950. He also designed and ran one of the first RCTs proper, when, with his long-time collaborator Austin Bradford Hill, he explored the treatment of tuberculosis with the antibiotic streptomycin.
All of the most important features of the RCT were present in this trial: the random assignment of patients to the drug and control groups; the exclusion of unsuitable patients from the trial; the reduction of bias by masking the identity of the group to which each patient belonged; and the consideration of ethical issues surrounding the withholding of a potentially effective treatment from patients with a fatal disease.
The trial was an outstanding success, demonstrating that the antibiotic could treat tuberculosis, the "white plague". In principle, it provided medicine with a new gold standard for testing therapeutic efficacy, and the foundation upon which a truly scientific biomedicine might be built.
But it took time and several disasters before RCTs became the recognised means of testing new drugs and bringing them to market; they did not become the main method until the 1960s. In the interim lay the spectacular disaster of thalidomide to remind the world of what happens when ad hoc testing is combined with an absence of ethics.
Grunenthal, a German pharmaceutical company, was searching for a synthetic antibiotic. One molecule that had promised much proved a dud as an antibiotic but appeared to have sedative properties. To test this, the researchers resorted to a "jiggle test", comparing how much a dangling cage swayed when filled with dosed or un-dosed rats. The cage jiggled less with the dosed rats, which, they believed, demonstrated the molecule's sedative properties.
Thalidomide was brought to market with no adequate testing, and Grunenthal gave German GPs samples to distribute willy-nilly to their patients. The drug was licensed with little or no evidence in many countries including the United Kingdom. Only in the United States did the diligence of Frances Kelsey, then-reviewer for the US Food and Drug Administration (FDA), prevent the licensing of the drug. We are still living with the results.
From the early 1970s, increasing efforts were made to develop a reliable evidence base for medical therapies, bringing science into a clinical setting. Even so, there was considerable resistance from clinicians. As US epidemiologist Alvan Feinstein commented in 1983,
the most vigorous defenders of the clinical art may want not only to resist further attempts at bringing science into clinical medicine, but also to roll back the clock … to an earlier era of clinical practice that relied upon intuition and individual judgement rather than epidemiological analysis.
It is unsurprising in the face of this resistance that it was not until 1992 that the manifesto for evidence-based medicine announced the emergence of “a new paradigm for medical practice”. This new paradigm combined systematic reviews and meta-analysis to ensure that medical education and therapy were as effective as they could be.
With the widespread adoption of evidence-based medicine, the older clinical style has finally withered away. But rather like venesection, it hung around long past its sell-by date because it was so powerfully bound to the identity of clinicians.