
Fraud and trouble with replication are chemistry’s problems too

How many times do we have to try before we are able to repeat those results? (Image: Queen's University)

Scientific fraud has reared its ugly head once more. In a note to chemists in the journal Organic Letters, editor-in-chief Amos Smith announced that an analysis of data submitted to the journal has revealed evidence of manipulation. While “these ethical violations comprise only a small fraction of the data submitted to the Journal”, they are certainly unacceptable.

Organic Letters is a top-tier chemistry journal, but instances of fraud in any journal are unacceptable and the chemistry community must act to prevent them. Journals can do this by employing data analysts to check submissions, although the onus ultimately lies with the corresponding author (often the principal investigator). The fundamental message is that chemists must be vigilant, check their data carefully and instil in their research groups the highest standards of ethical and moral conduct.

Even when data are not manipulated, there are occasions when scientists are unable to reproduce the results of studies published in even well-respected journals. There have been several high-profile cases in the past, such as the 1989 report of cold nuclear fusion by the eminent electrochemists Stanley Pons and Martin Fleischmann, or the work of the physicist Jan Hendrik Schön at Bell Labs in 2001 on organic semiconductors.

Although the majority of scientists produce high-quality, reproducible results, occasional cases of fraud or irreproducibility do occur, and it is likely that many more lurk within the scientific literature. The reasons are numerous, but the pressure on researchers to produce high-impact results is likely a significant factor.

Life science leads the way

In May, the journal Nature announced it was introducing measures to increase the reliability and reproducibility of its published research. This will involve a checklist for life sciences articles to prompt authors and referees to ensure that key methodological details are reported.

The idea is that Nature would allow increased space for more data, including raw data behind graphs and figures, analytical design elements, characterisation of key chemicals, cell lines and antibodies, and more precise descriptions of statistics.

Nature should be commended for trying to raise reporting standards, giving other researchers greater confidence in published findings and a better chance of reproducing them. Of course, the results reported in many life science papers would still not be easy to reproduce. Nonetheless, increased reporting and transparency can only help to validate results and provide detailed information should another researcher wish to investigate further.

The publication of raw data in fields beyond the life sciences is unlikely to prevent such difficulties entirely, but it could go some way to help. Many scientific journals already insist on the publication of considerable amounts of supporting information.

Issues with reproducibility arise, of course, when other researchers attempt to repeat published experiments but fail. This failure is often ascribed to differences in the way the experiment was run. The chemicals used may differ from the originals, perhaps containing (or lacking) small amounts of impurities such as trace metals, salts or water, or the conditions, apparatus or reaction design may differ from those reported. Even with added detail, then, it is not possible to conduct a truly identical experiment, which makes it hard to establish that the published results do not in fact work as reported. What can be done in such circumstances?

Bring on a revolution

One answer is the use of blogs to discuss the results of published work. There are several of these online forums and they provide a means to debate and improve or even contest research reported in the literature.

But would other scientists exploring the same research come across such a blog? With a link from the original article, they well might.

However, as the Nature article states: “Those who document the validity or irreproducibility of a published piece of work seldom get a welcome from journals”. Does this have to be the case?

A coherent forum for debating reported results could come from the journal itself, whose editors ought to have a vested interest in the validity of what they publish. Journals could establish properly moderated online blogs, linked to the original articles, so that any interested research group could explore the details further. This would save time, effort and money when repeating work where another laboratory has already tackled, and hopefully resolved, a difficulty.

Most scientists are busy with their own work and do not have time to scrutinise the work of others. However, where issues of reproducibility have emerged, these are generally ignored rather than aired.

Reporting every failed repetition of work in the scientific literature is clearly inappropriate. But when significant effort has been made to repeat published work without success, it is surely in the scientific community’s interest that this effort is not buried in laboratory notebooks. Such results would typically not be publishable in their own right, but minor improvements, comments or difficulties could now be discussed through an appropriate online forum. A trusted and moderated site with links to the original article could provide this. It might even encourage more detailed supporting information and help prevent or minimise fraud.
