It may take time for a tiny step forward to show its worth.
Scientists are rewarded with funding and publications when they come up with innovative findings. But in the midst of a 'reproducibility crisis,' being new isn't the only thing to value about research.
Science itself needs to be put under the microscope and carefully scrutinised to deal with its flaws.
We are observing two new phenomena. On one hand, doubt is being cast on the quality of entire scientific fields or sub-fields. On the other, this doubt plays out in the open, in the media and the blogosphere.
Step one is not being afraid to reexamine a site that’s been previously excavated.
Dominic O'Brien. Gundjeihmi Aboriginal Corporation
A team of archaeologists strove to improve the reproducibility of their results, a goal that influenced their choices in the field, in the lab and during data analysis.
Opening up data and materials helps with research transparency.
REDPIXEL.PL via Shutterstock.com
Partly in response to the so-called 'reproducibility crisis' in science, researchers are embracing a set of practices that aim to make the whole endeavor more transparent, more reliable – and better.
When new discoveries are jealously guarded under lock and key, science suffers.
A century-old case of scientific fraud illustrates how hard it is to untangle the truth when access to new discoveries is limited.
Online tools are changing the way psychology research is conducted.
Tools like Amazon's Mechanical Turk allow psychology researchers to recruit test subjects from around the world. But the system can also be exploited.
Science and integrity are under the microscope.
We asked three experts for their takes.
Experiment design affects the quality of the results.
IAEA Seibersdorf Historical Images
Embracing more rigorous scientific methods would mean getting science right more often than we currently do. But the way we value and reward scientists makes this a challenge.
Good science loses out when bad science gets the funding.
New studies on the quality of published research show we could be wasting billions of dollars a year on bad science, to the neglect of good science projects.
In scientific research, repetition is good.
Scientists build on knowledge gained and published by others. How can we know which findings to trust?
Weighing the evidence.
Meta-analyses that combine many different studies are the gold standard for medical evidence. But they are only as good as the research they examine.
Computer… or black box for data?
Virtually every researcher relies on computers to collect or analyze data. But when computers are opaque black boxes that manipulate data, it's impossible to replicate studies – a core value for science.
Run a study again and again – should the results hit the same bull’s-eye every time?
The field of psychology is trying to absorb a recent large-scale study that was able to replicate only 36 out of 100 major research papers. That finding is an issue, but maybe not for the reason you think.
What does it mean if the majority of what’s published in journals can’t be reproduced?
Researchers from around the globe tried to replicate 100 published psychology studies. They were successful on only 36.
How much of the research in these journals could be reproduced?
Tobias von der Haar
It's a problem when much of what winds up in scientific journals isn't replicable, for various reasons. The research community is taking baby steps toward addressing the "reproducibility crisis."
Scientists are often untrained in methods to make their research replicable.
Over the past few years, there has been a growing awareness that many experimentally established “facts” don’t seem to hold up to repeated investigation. This was highlighted in a 2010 article in the New…
How many times do we have to try before we are able to repeat those results?
Scientific fraud has raised its ugly head once more. In a note to chemists in the journal Organic Letters, Amos Smith, the editor-in-chief, has announced that an analysis of data submitted to the journal…