
How a study about Chronic Fatigue Syndrome was doctored, adding to pain and stigma

Dr. Ellen Wright Clayton, who has worked with those who have Chronic Fatigue Syndrome, spoke to an open committee at the Institute of Medicine in February 2015 about the biomedical nature of CFS. Susan Walsh/AP

The public relies on scientists to report their findings accurately and completely, but that does not always happen. Too often, researchers announce only their most favorable outcomes, while keeping more disappointing results well out of sight.

This phenomenon, first identified by the psychologist Robert Rosenthal in 1979, is called the “file drawer problem.” Although it is widely recognized – affecting drug trials, psychology experiments and most other fields – it has seldom been documented, for obvious reasons. Suppressed results are, well, suppressed, and they are usually discovered only by chance.

It was therefore almost unprecedented when, at the end of last year, a group of patients successfully unmasked the skewed data behind an influential British study – first published in The Lancet in 2011 – of the devastating disease known as Chronic Fatigue Syndrome (sometimes called myalgic encephalomyelitis, or ME/CFS).

My interest in this issue is both professional and personal. As a law professor, I have devoted much of my career to the study of judicial ethics, including the problem of implicit biases that can undermine the reliability of both court trials and clinical trials.

I have also been living with ME/CFS for over a decade, so I am acutely attuned to the need for responsible and transparent research on the illness. Unfortunately, the most extensive study of ME/CFS – called the PACE trial – was deeply flawed from its inception, in ways that the principal investigators have yet to acknowledge.

‘Dysfunctional’ beliefs all too real for those in pain

The story began in 2005, when a group of psychiatrists set out to test their theory that ME/CFS is primarily a psychosocial illness, characterized by patients’ “unhelpful cognitions” and their “dysfunctional” beliefs that their symptoms are caused by an organic disease.

Under this assumption, they recruited over 600 ME/CFS patients for the PACE trial and randomly divided them into four categories. One group was treated with cognitive behavior therapy (CBT), a form of psychotherapy that addresses patients’ “false perceptions” of their illness, and a second group received graded exercise therapy (GET), which consisted of supervised increases in their activity levels. The other two groups were essentially controls, receiving neither of the treatments under study.

In a 2013 article in Psychological Medicine, the PACE team announced its most striking results. This follow-up article claimed that the therapy arms of the study – CBT and GET – had achieved impressive 22 percent recovery rates – not just improvement rates – as opposed to only seven or eight percent in the control arms.

The result was enthusiastically promoted in the press, but many patients were suspicious, especially of the GET outcomes, which contradicted their experience of debilitating crashes following the simple movements of daily life.

ME/CFS patients have consistently explained that exertion exacerbates their worst symptoms. For many, even moderate exercise can result in a days-long crash, in which they are nearly immobilized by muscle weakness and joint pain. In the U.S., post-exertional relapse has been recognized as the defining characteristic of the illness by the Centers for Disease Control, the National Institutes of Health and the Institute of Medicine.

For the PACE investigators, however, the announced recovery results validated their conviction that psychotherapy and exercise provided the key to reversing ME/CFS.

There was just one problem. A subsequent investigation found that the PACE investigators had changed the standard for recovery midstream, weakening one of the key criteria to the point that a subject could actually have gotten worse in the course of the trial and yet still count as “recovered” following supervised GET.

Unraveling the mystery

Here is how it worked, as shown by the investigation: At the outset of the trial, patients were recruited who scored at 65 or lower on a measure called the physical function score, and recovery was defined as achieving a subsequent score of 85 or higher, which indicates a relatively healthy person.

Before the unblinded trial was completed, however, the recovery threshold was lowered to a score of 60 – below the entry ceiling of 65, which meant a patient could be disabled enough to qualify for the trial and simultaneously count as “recovered.”
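To make the arithmetic concrete, here is a minimal sketch in Python using entirely hypothetical scores; only the three thresholds described above (entry at 65 or lower, recovery originally at 85, later revised to 60) come from this account, and none of the numbers are actual PACE data.

```python
# Illustrative sketch only: hypothetical scores, not actual PACE data.
# Only the three thresholds (entry at 65 or lower, original recovery at 85,
# revised recovery at 60) come from the account above.

ENTRY_MAX = 65          # a physical function score of 65 or lower qualified a patient for the trial
ORIGINAL_RECOVERY = 85  # recovery threshold in the original protocol
REVISED_RECOVERY = 60   # recovery threshold used in the 2013 recovery paper

# (participant, baseline score, follow-up score) -- hypothetical values
participants = [
    ("A", 65, 60),  # got worse, yet meets the revised "recovered" cutoff
    ("B", 50, 70),  # improved, but counts as recovered only under the revised rule
    ("C", 40, 90),  # improved enough to meet the original definition
]

for name, baseline, followup in participants:
    assert baseline <= ENTRY_MAX, "score too high to have entered the trial"
    recovered_original = followup >= ORIGINAL_RECOVERY
    recovered_revised = followup >= REVISED_RECOVERY
    print(f"{name}: {baseline} -> {followup} | "
          f"original definition: {recovered_original} | revised definition: {recovered_revised}")
```

Under the revised cutoff, the hypothetical participant whose score fell from 65 to 60 is still classed as recovered – precisely the anomaly described above.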

It was the change in this outcome measure (and several others) that allowed the PACE researchers to declare their favorable outcome for GET. The unimpressive results under the original protocol went unpublished, as though they had been stuck in a figurative file drawer.

When the Psychological Medicine article was published in 2013, members of the patient community immediately pointed out the discrepancy. Because the study had been publicly funded, they sought the underlying data under the U.K.’s Freedom of Information law. The PACE investigators refused to release any of the raw results.

In October 2015, David Tuller of the University of California, Berkeley published a lengthy exposé of the PACE trial, pointing out the jiggered outcome measure, as detailed above, and many other flaws. His report attracted the attention of numerous American scientists, who joined an open letter seeking an independent review of the PACE data.

Finally, in summer 2016, a British Freedom of Information tribunal ordered the PACE team to unlock the file drawer and disclose their raw data. A revelation followed.


Exaggerated recovery claims

A group of patients and scholars reanalyzed the PACE data according to the original protocol definitions and, as suspected, the “recoveries” under CBT and GET all but disappeared. As they reported last December in a peer-reviewed medical journal, the recovery rate for CBT fell to seven percent and the rate for GET fell to four percent, statistically indistinguishable from the three percent rate for the untreated controls.

Thus, the PACE investigators proved nothing more than a familiar adage among statisticians: If you torture the data, they will confess anything.

Researchers in the U.S. and Australia have recently made great progress toward identifying biomarkers for ME/CFS, which may lead to an effective medical intervention. Over 100 prominent researchers, clinicians and organizations have called on Psychological Medicine to retract the PACE article, although the journal has not yet publicly responded.

Thanks to the original PACE announcement, however, graded exercise is still routinely prescribed throughout the U.S. and the U.K. despite reports that the treatments can cause intolerable pain and relapse. Those who question GET are often told that they must simply exercise more, no matter how badly they crash afterward.

It is bad enough to torture the data, but it is indefensible to torture patients based on manipulated results.
