Scientists are only human but statins error shows perils of bias

The recent retraction of an academic claim in a leading journal about the incidence of side effects from cholesterol-lowering drugs has sparked anger in the medical community and potentially undermined public and patient trust.

John Abramson, a lecturer at Harvard Medical School, and colleagues claimed that 18% of patients had discontinued the drugs, known as statins, due to side effects. The claim, made in a paper published by the BMJ, rested on a misinterpretation of figures from another study.

The error prompted a correction and an editorial by Fiona Godlee, the BMJ’s editor-in-chief, explaining that she was setting up an independent panel to consider retracting the full article. It was a commendable move, but the debacle clearly raises the question of what happens when researchers make mistakes.

Using statins in low-risk patients

The family of cholesterol-lowering drugs known as statins is among the most widely prescribed for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks in people with cardiovascular disease. A more contentious issue is their use for individuals who have no history of heart attacks, strokes or blockages in their blood vessels.

If statins were free of charge and had no side effects, the answer would be rather straightforward: go ahead and use them as soon as possible. However, like all medications, statins come at a price – both in terms of cost and potential side effects. Guidance from official bodies in the US recommends that the preventive use of statins in individuals without known cardiovascular disease be based on personalised risk calculations: if the risk of developing disease within the next 10 years is greater than 7.5%, the benefits are judged to outweigh the side effects; if it lies between 5% and 7.5%, physicians should still consider prescribing statins, with the proviso that the scientific evidence is less strong.
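
The tiered logic of that guidance can be sketched in a few lines of code. This is only an illustration: the function name is invented, and in practice the 10-year risk figure would come from a validated risk calculator rather than being passed in directly.

```python
# A minimal sketch of the tiered US guidance described above.
# The risk score itself would come from a validated calculator;
# here it is simply supplied as a number between 0.0 and 1.0.

def statin_guidance(ten_year_risk: float) -> str:
    """Map an estimated 10-year cardiovascular risk to the
    recommendation tiers described in the US guidance."""
    if ten_year_risk > 0.075:
        return "recommend: benefits judged to outweigh side effects"
    if ten_year_risk > 0.05:
        return "consider: the scientific evidence is less strong"
    return "preventive statin use not routinely recommended"

print(statin_guidance(0.09))  # above the 7.5% threshold
print(statin_guidance(0.06))  # in the 5-7.5% 'consider' band
```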

The guidance was met with scepticism by some medical experts. In October 2013, the BMJ published the paper by Abramson and colleagues, which re-evaluated data from a prior study of statin benefits in patients with less than a 10% cardiovascular disease risk over 10 years. They concluded that the benefits had been overstated and that statin therapy shouldn’t be expanded. To further bolster their case, they also cited a 2013 study which, they said, showed that 18% of patients had discontinued statins due to side effects.

The problem was that they ignored the caveats made by the authors of that study: because it was a retrospective review of patients’ charts, it couldn’t establish a true cause-and-effect relationship between discontinuing statins and actual side effects. According to the 2013 study, 17.4% of patients reported a “statin related incident”, and of those only 59% stopped the medication. So, at most, only 9-10% of patients could be said to have discontinued statins due to suspected side effects – not the 18% cited by Abramson.
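
The corrected figure follows from simple arithmetic on the two percentages quoted from the 2013 study, as this short calculation makes explicit:

```python
# Restating the arithmetic behind the corrected figure,
# using the two percentages quoted from the 2013 study.
reported_incident = 0.174  # patients reporting a "statin related incident"
stopped_fraction = 0.59    # of those, the fraction who stopped the drug

discontinued = reported_incident * stopped_fraction
print(f"{discontinued:.1%}")  # ~10.3% – far below the 18% cited
```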

The 2013 study also didn’t include a placebo control group, yet trials with placebo groups have documented similar rates of “side effects” in those taking statins and those taking placebos. This suggests that only a small minority of perceived side effects are truly caused by statins.

Whether the figure is 18%, 9%, or less is no small matter because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue a medication that has been shown to significantly reduce the risk of heart attacks in a wide range of patients.

Reviewing peer review

Every retraction of a peer-reviewed scholarly paper is something of an embarrassment to the authors of the paper as well as the journal because it suggests that the peer review process failed to identify one or more major flaws.

We can only speculate as to why it happened here before the independent panel has fully considered the process, but one has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments.

For most journals, peer review usually consists of two to four volunteer experts who routinely spend multiple hours analysing experimental design, methods, presentation of results and conclusions of a scholarly manuscript. They operate under the assumption that the manuscript’s authors are professional and honest in terms of how they present the data and describe their scientific methodology.

The BMJ’s correction also refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical papers often cite 30 or 40 studies, and it is unrealistic to expect peer reviewers to read all of them and ensure that each is properly interpreted. If this were the expectation, few peer reviewers would volunteer. In this particular case, however, any reviewer familiar with statins and the controversies surrounding their side effects should have raised concerns about the extraordinarily high figure of 18%.

To err is human, to study errors is science

All researchers make mistakes – because they are human. It is impossible to eliminate all errors in any human endeavour, but we can construct safeguards to reduce their occurrence and magnitude.

Overt fraud and misconduct are rare, but their effects can be devastating. One of the most notorious examples was the Dutch psychologist Diederik Stapel, who fabricated data from non-existent study subjects and published numerous papers based on them. The field of cell therapy in cardiovascular disease suffered a major setback when an investigation found evidence of scientific misconduct in papers by the German cardiologist Bodo Strauer. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging scepticism about the efficacy of using bone marrow cells to treat heart disease.

However, a far more likely source of errors in research is cognitive bias. Researchers who believe in certain hypotheses or ideas can be more prone to interpreting data in a manner that supports their preconceived notions. In Abramson’s case, his opposition to statin use might have led him to interpret the data on side effects differently to someone who supports statin use.

Work by Piero Anversa, one of the world’s most widely cited stem cell researchers, was retracted after an investigation found that his group’s claim – that the adult human heart replaces its entire complement of beating heart cells every eight to ten years – was significantly compromised. A prior study had found only a minimal turnover of 1% or less per year, and cardiologists don’t routinely observe near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the sharp contrast is that Anversa hadn’t accounted for contamination that could have falsely elevated cell counts.
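
A back-of-envelope calculation shows how far apart the two claims are. The constant linear renewal rate assumed below is a simplification for illustration, not the model either study actually used:

```python
# Back-of-envelope comparison of the two turnover claims,
# assuming (for simplicity) a constant linear renewal rate.

for years_to_full_renewal in (8, 10):
    rate = 1 / years_to_full_renewal
    print(f"full renewal in {years_to_full_renewal} years "
          f"-> ~{rate:.0%} of cells per year")

annual_turnover = 0.01  # the prior study's upper estimate
print(f"1% per year -> ~{1 / annual_turnover:.0f} years for full renewal")
```

On these simple terms, Anversa’s claim implies an annual turnover roughly ten times higher than the prior study’s estimate.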

High quality science is characterised by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors. There have been calls for better ways to track reproducibility and overhaul the peer review system in areas such as psychology, stem cell research and cancer biology.

One emerging idea is post-publication peer evaluation, which would invite scientists to continue commenting on the quality and accuracy of a paper after its publication and to engage authors in this process. As the head of a research group I have to take mandatory courses (in some cases annually) in areas such as lab hazards and ethics, but there’s an underlying assumption that if you are no longer a trainee you probably know how to perform experiments. It wouldn’t hurt to remind scientists (regularly) that we can all become victims of our own bias and that we must continuously re-evaluate how we conduct science and be humble enough to listen to colleagues, especially when they disagree with us.
