To justify their position, Liberal-National MP George Christensen and AgForce’s Michael Guerin specifically invoked the “replication crisis” in science, in which researchers in various fields have found it difficult or impossible to reproduce and validate original research findings. Their proposal, however, is not a good solution to the problem.
The more important context is that these politicians and lobbyists are opposed to new laws to curb agricultural runoff onto the Great Barrier Reef that are underpinned by research finding evidence of harm from poor water quality. Christensen suggests that many scientific papers behind such regulation “have never been tested and their conclusions may be wrong”. But Christensen seems to be targeting specific results he doesn’t like, rather than trying to improve scientific practice in a systematic way.
In various scientific areas, including psychology and preclinical medicine, large-scale replication projects have failed to reproduce the findings of many original studies. The rates of success differ between fields, but on average only half or fewer of published studies were successfully replicated. Clearly there is a problem.
Much of the problem is due to hyper-competitiveness in science, funding shortfalls, publication practices, and the use of performance metrics that privilege quantity over quality.
Scientists themselves have documented the poor practices that underlie this crisis, such as the misuse of statistics, often unwittingly, in ways that bias findings towards attention-grabbing conclusions. These practices distort the evidence available to policy-makers and other researchers.
Scientists have also already produced responses to some problems: reforms in peer review, guidelines for methods and statistical reporting, and new platforms for data sharing. These improvements are possible only by taking the replication crisis seriously. Paying lip service to it so as to attack particular legislation is the opposite of this.
Making decisions under uncertainty
Establishing an agency with a mission to adjudicate on hand-picked scientific results would make things worse.
At best, such an agency would be one more review panel. At worst, it would be a bureaucratic front for the political agenda of the day. Either way, it would make scientists even more cautious, and delay the flow of information to policy-makers.
The track records of the lobbyists involved in this latest move suggest they have little genuine interest in improving science. AgForce reportedly deleted more than a decade’s worth of data meant for a government water quality program before the new runoff regulations took effect.
Exploiting scientific uncertainty is a classic tactic of industry lobbyists, used to justify inaction on everything from tobacco to climate change. In their misuse of the replication crisis, local politicians and lobby groups seem to be copying moves from a well-worn overseas playbook.
Scientists can never make pronouncements with the certainty of a politician. But if, as a society, we want to benefit fully from science, we need to accept the idea of scientific uncertainty. The existence of uncertainties does not justify rejection of the best available evidence.
To defend science we need to improve it
It is tempting to respond to politically motivated attacks on science by simply pointing to the excellent track record of scientific knowledge, or the good intentions of the vast majority of scientists.
But there is a better response: scientists themselves have been improving science. As advocates of reform, we have been told that pointing out problems helps the anti-science movement. We disagree: being open about our work to improve science is essential for building public trust.
Science is something that humans do. It is self-correcting when, and only when, scientists correct it. Research is hard work, and we can’t expect scientists never to make errors or to provide complete certainty. But we can expect scientists to create a culture that values detecting and correcting errors.
Admitting errors in one’s own work, finding them in others’ work, reporting them, retracting results when necessary, and correcting the record should be among the most highly regarded of scientific practices. We need to shift the balance of incentives away from rewarding only groundbreaking discoveries, and towards the painstaking work of confirmation.
A cultural shift in this regard is already underway, to better align scientific practices with scientific values. But there is more to be done, and governments can help.
There are sensible policies to support the open science initiatives that will reduce error production and increase error detection in scientific work. Different fields need different approaches, but here are two ideas.
First, improve funding allocation procedures. Reward self-correcting activities such as replication studies. Don’t require every piece of funded research to be groundbreaking. Don’t rely on flawed metrics. Enforce best-practice data management and open data whenever feasible. All of this can be done without establishing an inefficient agency whose likely effect is to delay action.
Second, establish a national independent office of research integrity to allow errors in the scientific literature, whether deliberate or accidental, to be corrected in a fair, efficient, and systematic way. Unlike the politicians’ proposal, this would improve the process for all researchers, not just act as a handbrake on research findings that lobbyists don’t like.