Is psychology really in crisis?

I just can’t seem to get my replication studies published. Photographee.eu/Shutterstock.com

Modern psychology is apparently in crisis. This claim is nothing new. From phrenology to psychoanalysis, psychology has traditionally had an uneasy scientific status. Indeed, the philosopher of science Karl Popper viewed Freud’s theories as a typical example of pseudoscience, because no test could ever show them to be false. More recently, psychology has feasted on a banquet of extraordinary findings whose scientific credibility has also been questioned.

Some of these extraordinary findings include Daryl Bem’s experiments, published in 2011, which seemed to show that future events can influence the past. Bem, an emeritus professor at Cornell University, reported that people are more likely to remember a list of words if they practise them after a recall test, compared with practising them before the test. In another study, he showed that people are significantly better than chance at selecting which of two curtains hides a pornographic image.

Then there’s Yale’s John Bargh, who in 1996 reported that, when unconsciously primed with an “elderly stereotype” (by unscrambling jumbled sentences containing words such as “Florida” and “bingo”), people subsequently walk more slowly. Add to this Roy Baumeister, who in 1998 presented evidence suggesting we have a finite store of willpower which is sapped whenever we resist temptations such as eating chocolates. Or, in the same year, Ap Dijksterhuis and Ad Van Knippenberg, who showed that performance on Trivial Pursuit is better after people list typical characteristics of a professor rather than those of a football hooligan.

Does thinking about him really make you better at Trivial Pursuit? Dragon Images/Shutterstock.com

These studies are among the most controversial in psychology, not least because other researchers have had difficulty replicating them. They raise concerns not only about the methods psychologists use but also, more broadly, about psychology itself.

Do not repeat

A survey of 1,500 scientists published in Nature last month indicated that 24% had published a successful replication and 13% an unsuccessful one. Contrast this with over a century of psychology publications, in which just 1% of papers attempted to replicate past findings.

Editors and reviewers have been complicit in a systemic bias that has turned high-profile psychology journals into storehouses for the strange. Many psychologists are obsessed with the “impact factors” of journals (as are the journals), and one way to increase impact is to publish curios. Certain high-impact journals have a reputation for publishing curios that never get replicated but which attract lots of attention for the author and journal. By contrast, confirming the findings of others through replication is unattractive, rare and relegated to less prestigious journals.

Despite psychology’s historical abandonment of replication, is the tide turning? This year, a crowd-sourced initiative – the OSC Reproducibility project – attempted to replicate 100 published findings in psychology. The multinational collaborators replicated just over a third (36%) of the studies. Does this mean that psychological findings are unreliable?

Replication projects are selective, targeting studies that are cheaper and less technically complicated to replicate, or those that are simply unbelievable. Other projects, such as “Many Labs”, have reported a replication rate of 77%. All initiatives are non-random, and headline replication rates reflect the studies that are sampled. Even if a random sample of studies were examined, we don’t know what would constitute an acceptable replication rate in psychology. Nor is this an issue specific to psychology. As John Ioannidis noted: “most published research findings are false”. After all, scientific hypotheses are our current best guesses about phenomena, not a simple accumulation of truths.

Questionable research practices

The frustration of many psychologists is palpable because it seems so easy to publish evidence consistent with almost any hypothesis. A likely cause of both unusual findings and non-replicability is psychologists indulging in questionable research practices (QRPs).

In 2012, a survey of 2,000 American psychologists found that most indulged in QRPs. Some 67% admitted selectively reporting studies that “worked”, while 74% failed to report all measures they had used. The survey also found that 71% continued to collect data until a significant result was obtained and 54% reported unexpected findings as if they were expected. And 58% excluded data after analyses. Astonishingly, more than one-third admitted they had doubts about the integrity of their own research on at least one occasion and 1.7% admitted to having faked their data.
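One of these practices, collecting data until a significant result is obtained (so-called optional stopping), is easy to demonstrate. The short simulation below is a minimal sketch, not from the article or the survey: it repeatedly draws pure noise (so the null hypothesis is true by construction), runs a two-sided z-test after every new observation, and stops as soon as p dips below 0.05. The function names and parameters are illustrative assumptions.

```python
# Sketch of how optional stopping inflates false positives (illustrative only).
# We simulate standard-normal data where the null (mean = 0) is TRUE, and
# "peek" at the p-value after every new observation from n_min to n_max.
import math
import random

def p_value(mean, n):
    """Two-sided p-value for a z-test with known sigma = 1 against mu = 0."""
    z = abs(mean) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def false_positive_rate(n_sims=2000, n_min=10, n_max=100, alpha=0.05, seed=1):
    random.seed(seed)
    hits = 0
    for _ in range(n_sims):
        total = 0.0
        for n in range(1, n_max + 1):
            total += random.gauss(0, 1)
            # Stop as soon as any peek looks "significant" - a QRP.
            if n >= n_min and p_value(total / n, n) < alpha:
                hits += 1
                break
    return hits / n_sims

# A single test at a fixed n would be "significant" about 5% of the time;
# peeking after every observation pushes the error rate well above alpha.
print(false_positive_rate())
```

Run as written, the error rate comes out several times larger than the nominal 5%, which is why a literature built on such practices can fill up with findings that later fail to replicate.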

The problems associated with modern psychology are longstanding and cultural, with researchers, reviewers, editors, journals and news-media all prioritising and benefiting from the quest for novelty. This systemic bias, coupled with minimal agreement on fundamental principles in certain areas of psychology, means questionable research practices can flourish – consciously or unconsciously. Large-scale replication projects will not address the cultural problems and may even exacerbate them by presenting replication as something special that we use to target the unbelievable. Replication – whether judged as failed or successful – is a fundamental aspect of normal science and needs to be both more common and more valued by psychologists and psychology journals.
