
How do we solve science’s ‘credibility problem’?


Science is considered a source of truth and the importance of its role in shaping modern society cannot be overstated. But in recent years science has entered a crisis of trust.

The results of many scientific experiments appear to be surprisingly hard to reproduce, while mistakes have highlighted flaws in the peer review system. This has damaged scientific credibility and prompted researchers to devise new measures to maintain the quality of academic research and its findings.

Credibility crisis

This is particularly relevant in the UK, whose government prides itself on science-driven policy making. Policies are often drawn from behavioural research, traditionally considered a “soft science”. The head of the UK’s behavioural insights team – the “nudge unit” – argues that these days research economists can “change the world for the better”. But social scientists have debated the reliability and reproducibility of some behavioural research, prompting some to wonder whether science-driven policy has its limitations – and whether over-reliance on it can even backfire.

So leading scientists have put forward a variety of proposals to change the way science produces knowledge. These include promoting transparency about research designs, creating incentives for more replication of experiments, and requiring researchers to submit a full plan of the design and analysis before the study is run – known as pre-registration.

It is remarkable, however, that economists have so far remained largely silent on this credibility crisis. Economics is, after all, the discipline that specialises in analysing strategic behaviour and in designing incentives to promote desirable outcomes.

Our research takes up this challenge and provides a first step in examining the theoretical effects of the proposed policies of increased transparency and monitoring on the reliability of scientific results.

Scientific steroids

Although the image of altruistic researchers working hard to discover the truth is strong in the minds of the general public, the actual process by which academic research is conducted is rather different. Economic theory models the various incentives facing scientists, prominent among which is the desire to ascend the academic ladder.

We focus on proposals to impose transparency, which would stop researchers from engaging in the questionable practices that make scientific evidence difficult to interpret.

The main result of our model is that discouraging slight transgressions, such as failing to report important details of the analysis, will also reduce more severe questionable research practices such as outright data manipulation. This is because questionable research practices serve as the “steroids” of the scientific race: the more widespread a given form of misconduct becomes, the greater the incentive to engage in more extreme misconduct. Accordingly, a policy that eradicates mild forms of misconduct also discourages the use of stronger “performance enhancers”.

We examine a setting where researchers are motivated to conduct research ethically or to maintain a good reputation, but are also concerned about being published in a limited number of top journals. The latter is crucial, as it introduces an “economic externality”: each researcher’s payoff depends on how other researchers behave.

Easing the pressure

The likelihood that an individual researcher will commit a questionable research practice depends on the behaviour of other researchers: the more widespread lighter transgressions become, the more frequent outright manipulation will be, as researchers seek to guarantee a unique result and the acclaim that it brings.

Therefore a transparency policy that reduces lighter transgressions does not, as might be expected at first glance, lead to more severe misbehaviour. On the contrary, reducing the incidence of lighter misdemeanours will make the race to publication less competitive and thus ease the pressure to engage in questionable practices.
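To make this mechanism concrete, here is a minimal numerical sketch – not the model itself. It assumes researchers compete for a fixed number of top-journal slots, misconduct inflates the apparent quality of a result, and only the top-ranked results get published; all parameter values and the Python set-up are illustrative assumptions. The simulation estimates how much a researcher gains by escalating from a mild transgression to outright manipulation, as a function of how many rivals are already cutting corners.

```python
import numpy as np

# Minimal sketch, not the authors' model: N researchers compete for K
# top-journal slots, misconduct inflates the apparent quality of a result,
# and only the top-ranked results get published. All values are illustrative.

rng = np.random.default_rng(0)

N = 100            # researchers competing in a field
K = 10             # top-journal slots available
PRIZE = 10.0       # career value of a top publication
NOISE_SD = 0.5     # luck and genuine quality differences
BOOST = {"honest": 0.0, "mild": 1.0, "severe": 2.5}  # assumed quality inflation
TRIALS = 20_000    # Monte Carlo draws per estimate


def publication_prob(own_action, frac_mild_rivals):
    """Chance that the focal researcher lands a top slot when a given
    fraction of the other N-1 researchers commit a mild transgression."""
    n_rivals = N - 1
    n_mild = int(round(frac_mild_rivals * n_rivals))
    rival_boost = np.concatenate([np.full(n_mild, BOOST["mild"]),
                                  np.zeros(n_rivals - n_mild)])
    rival_quality = rival_boost + rng.normal(0.0, NOISE_SD, (TRIALS, n_rivals))
    own_quality = BOOST[own_action] + rng.normal(0.0, NOISE_SD, TRIALS)
    # Published if the focal result beats the K-th best rival result.
    kth_best_rival = np.partition(rival_quality, -K, axis=1)[:, -K]
    return float(np.mean(own_quality > kth_best_rival))


print("share of rivals cutting corners -> gain from escalating to manipulation")
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    gain = PRIZE * (publication_prob("severe", frac) - publication_prob("mild", frac))
    print(f"  {frac:4.0%} -> {gain:5.2f}")
```

In this toy set-up the estimated gain from escalation rises as mild transgressions become more widespread – the complementarity described above. A policy that makes mild transgressions costly therefore also weakens the pull towards outright manipulation.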

Other possible policies could aim at reducing more severe transgressions – such as data fabrication – by using statistical techniques designed to detect them. But this could increase the rewards and frequency of lighter transgressions, making the overall effect on the reliability of scientific results unclear.

Mathematical models are especially useful when they address policy changes that are not amenable to direct experimentation. This is because it is the theory that bridges the gap between the status quo and the proposed alternative. Performing direct experiments on researcher misconduct is costly and difficult, but the potential effects of proposed reforms can still be evaluated using economic theory.

Our model teaches us that we should feel confident that implementing the transparency proposals will help science fulfil its purpose of discovering the truth.
