
When ‘exciting’ trumps ‘honest’, traditional academic journals encourage bad science

[Photo caption: One more corner, then I’ll answer your questions. campuspartymexico, CC BY]

Imagine you’re a scientist. You’re interested in testing the hypothesis that playing violent video games makes people more likely to be violent in real life. This is a straightforward theory, but there are still many, many different ways you could test it. First you have to decide which games count as “violent”. Does Super Mario Brothers count because you kill Goombas? Or do you only count “realistic” games like Call of Duty? Next you have to decide how to measure violent behaviour. Real violence is rare and difficult to measure, so you’ll probably need to look at lower-level “aggressive” acts – but which ones?

Any scientific study in any domain, from astronomy to biology to social science, contains countless decisions like this, large and small. On a given project, a scientist will probably end up trying many different permutations, generating masses and masses of data.

The problem is that in the final published paper – the only thing you or I ever get to read – you are likely to see only one result: the one the researchers were looking for. This is because, in my experience, scientists often leave complicating information out of published papers, especially if it conflicts with the overall message they are trying to get across.

In a large recent study, around a third of scientists (33.7%) admitted to practices like dropping data points based on a “gut feeling” or selectively reporting only the results that “worked” (those that showed what their theories predicted). About 70% said they had seen colleagues doing this. If this is what they are prepared to admit to a stranger researching the issue, the real numbers are probably much, much higher.
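To see how quickly this distorts the picture, consider a back-of-the-envelope simulation. The sketch below is purely illustrative – the numbers (ten candidate “aggression” measures, 50 participants per group, a significance cut-off of roughly p < 0.05) are hypothetical, and it uses Python with NumPy only. It simulates a world where violent games have no effect at all, but the researcher measures aggression ten different ways and reports whichever measure “works”:

```python
# A minimal sketch of how selective reporting inflates false positives.
# Hypothetical setup: the true effect is ZERO, but the researcher measures
# ten different "aggression" outcomes and reports whichever one "works".
import numpy as np

rng = np.random.default_rng(0)
n_studies = 10_000   # simulated studies
n_measures = 10      # candidate aggression measures per study
n_per_group = 50     # participants per group

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the same distribution: no real difference.
    gamers = rng.normal(0, 1, (n_measures, n_per_group))
    controls = rng.normal(0, 1, (n_measures, n_per_group))
    # Two-sample t statistic for each of the ten measures.
    diff = gamers.mean(axis=1) - controls.mean(axis=1)
    se = np.sqrt(gamers.var(axis=1, ddof=1) / n_per_group +
                 controls.var(axis=1, ddof=1) / n_per_group)
    t = diff / se
    # "Selective reporting": publish if ANY measure crosses |t| > 1.98
    # (roughly two-sided p < 0.05 at ~98 degrees of freedom).
    if np.any(np.abs(t) > 1.98):
        false_positives += 1

print(false_positives / n_studies)  # ~0.40, not the nominal 0.05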

It is almost impossible to overstate how big a problem this is for science. It means that, looking at a given paper, you have almost no idea of how much the results genuinely reflect reality (hint: probably not much).
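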

Pressure to be interesting

At this point, scientists probably sound pretty untrustworthy. But the scientists aren’t really the problem. The problem is the way scientific research is published: specifically, the pressure all scientists are under to be interesting.

This problem comes about because science, though mostly funded by taxpayers, is published in academic journals you have to pay to read. Like newspapers, these journals are run by private, for-profit companies. And, like newspapers, they want to publish the most interesting, attention-grabbing articles.

This is particularly true of the most prestigious journals like Science and Nature. What this means in practice is that journals don’t like to publish negative or mixed results – studies where you predicted you would find something but actually didn’t, or studies where you found a mix of conflicting results.

Let’s go back to our video game study. You have spent months conducting a rigorous investigation but, alas, the results didn’t quite turn out as your theory predicted. Ideally, this shouldn’t be a problem. If your methods were sound, your results are your results. Publish and be damned, right? But here’s the rub. The top journals won’t be interested in your boring negative results, and being published in these journals has a huge impact on your future career. What do you do?

If your results are unambiguously negative, there is not much you can do. Foreseeing long months of re-submissions to increasingly obscure journals, you consign your study to the file-drawer for a rainy day that will likely never come.

But if your results are less clear-cut? What if some of them suggest your theory was right, but some don’t? Again, you could struggle for months or years, scraping the bottom of the journal barrel to find someone to publish the whole lot.

Or you could “simplify”. After all, most of your results are in line with your predictions, so your theory is probably right. Why not leave those “aberrant” results out of the paper? There is probably a good reason why they turned out like that. Some anomaly. Nothing to do with your theory really.

Nowhere in this process do you feel like you are being deceptive. You just know what type of papers are easiest to publish, so you chip off the “boring” complications to achieve a clearer, more interesting picture. Sadly, the complications are probably closer to messy reality. The picture you publish, while clearer, is much more likely to be wrong.

Science is supposed to have a mechanism for correcting these sorts of errors. It is called replication, and it is one of the cornerstones of the scientific method: someone else repeats what you did to see if they get the same results. Unfortunately, replication is another thing the science journals consider “boring” – so it is rarely attempted. You can publish your tweaked and nudged and simplified results, safe in the knowledge that no one will ever try exactly the same thing again and find something different.

This has enormous consequences for the state of science as a whole. When we ask “Is drug A effective for disease B?” or “Is policy X a good idea?”, we are looking at a body of evidence that is drastically incomplete. Crucially, it is missing a lot of studies that said “No, it isn’t”, and includes a lot of studies which should have said “Maybe yes, maybe no”, but actually just say “Yes”.
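To put rough numbers on how badly this skews the evidence, here is a short illustrative calculation, in the same spirit as the sketch above. Every figure in it is hypothetical – 1,000 studies run, 10% of hypotheses actually true, 80% statistical power, a 5% false-positive rate – and it simply assumes that only “positive” results get published:

```python
# Illustrative arithmetic (all numbers hypothetical): what the published
# record looks like if only "positive" results make it into journals.
hypotheses = 1000     # studies actually run
true_rate = 0.10      # fraction of tested hypotheses that are really true
power = 0.80          # chance a true effect yields a positive result
alpha = 0.05          # chance a null effect yields a false positive

true_positives = hypotheses * true_rate * power            # 80
false_positives = hypotheses * (1 - true_rate) * alpha     # 45
published = true_positives + false_positives               # 125 "Yes" papers
file_drawer = hypotheses - published                       # 875 unseen "No"s

print(f"Published 'Yes' papers: {published:.0f}")
print(f"...of which false: {false_positives / published:.0%}")  # ~36%
print(f"Negative results never published: {file_drawer:.0f}")
```

Under these made-up but not implausible assumptions, more than a third of the published “Yes” findings are simply wrong, and the 875 studies that would have tempered them are invisible.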

We are making huge, life-altering decisions on the basis of bad information. All because we have created a system which treats scientists like journalists; which tells them to give us what is interesting instead of what is true.

Publish more papers, even boring ones

This seems like a big, abstract, hard-to-fix problem. But we actually have a solution right in front of us. All we have to do is continue changing the scientific publishing model so it no longer has anything to do with “interest” and is more open to publishing everything, as long as the methodology is sound. Open Access journals like PLOS ONE already do this. They publish everything they receive that is methodologically sound, whether it is straightforward or messy, headline-grabbing or mind-numbingly boring.

Extending this model to every academic journal would, at a stroke, remove the single biggest incentive for scientists to hide inconvenient results. The main objection to this is that the resulting morass of published articles would be tough to sort through. But this is the internet age. We have become past masters at sorting through masses of crap to get to the good stuff – the internet itself would be unusable if we weren’t.
