
How one NHS anaesthetist is fighting international medical research fraud

John Carlisle is a consultant anaesthetist at Torbay Hospital on England’s south coast. Unless you’ve been one of his patients, you’ve probably never heard of him. But he’s a researcher too, and he’s developed statistical methods to help spot signs of fraud in medical research.

There’s a public image of medical researchers as trustworthy people, working hard to make us all healthier. Scientists and doctors come right at the top of an international survey of the trustworthiness of professions. I’d say that image is deserved. But there are cases of very questionable research practice, or even downright fraud.

People usually remember the discredited paper published in The Lancet in 1998, linking the MMR vaccine to autism. Because of the major problems with that research, its lead author, Andrew Wakefield, was struck off the medical register.

The Wakefield case is unusual, though. Most cases of research fraud are harder to spot and aren’t so widely publicised.


Read more: Autism and vaccines: more than half of people in Britain, France, Italy still think there may be a link


Raft of retractions

Carlisle and other anaesthetists got suspicious, more than a decade ago, about studies by a Japanese researcher, Yoshitaka Fujii. He published results from a series of randomised controlled clinical trials (RCTs) investigating medicines to prevent nausea and vomiting in patients after surgery. Carlisle and others thought the data was too tidy to be true. He showed that it was extremely unlikely that some of the patterns in Fujii’s data had occurred by chance. That analysis, and the further investigation it prompted, cost Fujii his university job.

No fewer than 183 of his papers were retracted, that is, effectively “unpublished” by the journals concerned. That’s far more retractions than any other individual has had.

Since then, Carlisle has developed his methods further. In 2017, he produced an analysis of over 5,000 clinical trials. Most were published in journals in his own field, anaesthetics, but he also included two top-ranking American medical journals, the Journal of the American Medical Association (JAMA) and the New England Journal of Medicine (NEJM). He found suspect data in about 90 papers.

In some cases, there were innocent explanations. But there were several retractions. For instance, a major Spanish trial, investigating whether a Mediterranean diet could help prevent heart disease and strokes, had to be retracted. The random allocation of people to different diets had, in some cases, been done wrongly. A revised trial report, omitting the wrongly randomised participants, appeared later.

Adopted by medical journals

Carlisle’s methods are now routinely used by at least two medical journals to screen reports of RCTs that are submitted for publication. They are Anaesthesia, in Carlisle’s own specialty, and the prestigious NEJM. Others may well follow.

Carlisle’s method cannot say definitively whether a trial report is fraudulent. It’s a screening method that suggests some trial reports need to be examined more thoroughly to check whether anything untoward is going on. There could, sometimes, be innocent explanations for the unusual patterns of data that the method detects.

Andrew Klein, the Cambridge-based anaesthetist who is editor-in-chief of the journal Anaesthesia, told me via email that the journal receives about 500 submissions of reports on RCTs each year. These are all checked using Carlisle’s method, and more than one in every 40 is flagged as potentially fraudulent. Not all of these will turn out to be fraudulent, but the journal asks to see the original patient data, checks it, and takes appropriate further action if necessary.


Read more: Retraction of scientific papers for fraud or bias is just the tip of the iceberg


Carlisle’s method builds on particular features of how randomised clinical trials are run. A simple RCT might compare how good two different drugs, A and B, are at curing a certain disease. Patients with the disease are divided into two groups. One group gets drug A, the other drug B. Then they are all followed up to see who is cured.

The key feature is that the division into groups is made at random. This is to ensure that the two groups of patients are similar, on average, in all respects. Then, if patients on drug A do better, one can be confident that this is because they took drug A rather than B, and not because of some other difference.
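To make the idea concrete, here is a minimal Python sketch (the patient ages are invented for illustration, not taken from any real trial) showing that random allocation tends to leave the two groups similar at baseline:

```python
import random

# Hypothetical baseline data: ages of 200 patients entering a trial.
random.seed(1)
ages = [random.gauss(60, 12) for _ in range(200)]

# Allocate patients to the two groups at random.
random.shuffle(ages)
group_a, group_b = ages[:100], ages[100:]


def mean(xs):
    return sum(xs) / len(xs)


# On average, randomisation leaves the groups similar at baseline.
print(f"Mean age, drug A group: {mean(group_a):.1f}")
print(f"Mean age, drug B group: {mean(group_b):.1f}")
```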

In publishing the trial results, the researchers must report “baseline” comparisons between the two groups, made before the treatments start. Carlisle’s method uses p-values, which indicate how likely it is that discrepancies at least as large as those observed between the groups would arise by chance alone, and it combines all the baseline p-values in a trial into a single measure.

P-values explained.
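As a concrete illustration, and not a description of Carlisle’s actual software, here is a small Python sketch with invented baseline data showing how a p-value can be computed for each reported baseline characteristic, for example with a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented baseline measurements for two trial groups of 100 patients each.
baseline = {
    "age":    (rng.normal(60, 12, 100), rng.normal(60, 12, 100)),
    "weight": (rng.normal(80, 15, 100), rng.normal(80, 15, 100)),
}

# One p-value per baseline variable: how compatible is the observed
# difference between the groups with chance alone?
baseline_p_values = []
for name, (group_a, group_b) in baseline.items():
    p = stats.ttest_ind(group_a, group_b).pvalue
    baseline_p_values.append(p)
    print(f"{name}: p = {p:.3f}")
```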

His method will unearth trials where the two groups appear too similar to be true, or too different to be true. Either pattern indicates that, possibly, the data has been invented or interfered with.
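Carlisle’s published procedure is more sophisticated than this, but one standard way to capture the same idea is Fisher’s method for combining p-values, applied in both directions: flag a trial if its baseline p-values are collectively too small (groups implausibly different) or too large (groups implausibly similar). The helper function and threshold below are purely illustrative assumptions:

```python
from scipy import stats


def screen_baseline(p_values, threshold=0.001):
    """Screen a trial's baseline p-values for implausible patterns.

    A simplified stand-in for Carlisle-style screening, not his exact
    method: Fisher's method on the p-values tests whether the groups are
    implausibly different; applying it to 1 - p tests whether they are
    implausibly similar.
    """
    _, p_too_different = stats.combine_pvalues(p_values, method="fisher")
    _, p_too_similar = stats.combine_pvalues([1 - p for p in p_values],
                                             method="fisher")
    if p_too_different < threshold:
        return "flag: baseline groups implausibly different"
    if p_too_similar < threshold:
        return "flag: baseline groups implausibly similar"
    return "no flag"


# Example: a suspiciously well-balanced set of baseline p-values.
print(screen_baseline([0.97, 0.99, 0.96, 0.98, 0.995]))
```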

Carlisle’s method does make some statistical assumptions that aren’t always appropriate. But it’s a fairly simple approach that suggests some trials deserve more scrutiny. It’s a valuable part of the efforts to stop fraud in medical research. The question of why a small number of researchers should commit these frauds is complicated. I don’t believe all fraud will ever be eliminated from clinical research, but that’s no reason not to be vigilant.
