Online tools are changing the way psychology research is conducted.
Tools like Amazon's Mechanical Turk allow psychology researchers to recruit test subjects from around the world. But the system can also be exploited.
Scientific integrity is under the microscope.
We asked three experts for their takes.
Experiment design affects the quality of the results.
Embracing more rigorous scientific methods would mean getting science right more often than we currently do. But the way we value and reward scientists makes this a challenge.
Good science loses out when bad science gets the funding.
New studies on the quality of published research show we could be wasting billions of dollars a year on bad science, while good science projects go unfunded.
In scientific research, repetition is good.
Scientists build on knowledge gained and published by others. How can we know which findings to trust?
Weighing the evidence.
Meta-analyses that combine many different studies are the gold standard for medical evidence. But they are only as good as the research they examine.
Computer… or black box for data?
Virtually every researcher relies on computers to collect or analyze data. But when that software is an opaque black box, it's impossible to replicate studies – a core value for science.
Run a study again and again – should the results hit the same bull’s-eye every time?
The field of psychology is trying to absorb a recent big study that was able to replicate only 36 out of 100 major research papers. That finding is an issue, but maybe not for the reason you think.
What does it mean if the majority of what’s published in journals can’t be reproduced?
Researchers from around the globe tried to replicate 100 published psychology studies. Only 36 replications succeeded.
How much of the research in these journals could be reproduced?
It's a problem when much of what winds up in scientific journals isn't replicable, for various reasons. The research community is taking baby steps toward addressing the "reproducibility crisis."
Scientists are often untrained in methods to make their research replicable.
Over the past few years, there has been a growing awareness that many experimentally established “facts” don’t seem to hold up to repeated investigation. This was highlighted in a 2010 article in the New…
How many times do we have to try before we are able to repeat those results?
Scientific fraud has raised its ugly head once more. In a note to chemists in the journal Organic Letters, Amos Smith, the editor-in-chief, has announced that an analysis of data submitted to the journal…