Scientists are facing a reproducibility crisis.
Y Photo Studio/shutterstock.com
Science is in a reproducibility crisis. This is driven in part by invalid statistical analyses that happen long after the data are collected – the opposite of how things are traditionally done.
What can an algorithm find when it reads a book?
Some AI technologies aren't advanced enough to provide useful insights, but simpler tools can yield new opportunities to explore the humanities.
Wansink’s research showed plate size matters when it comes to how much we eat.
The Journal of the American Medical Association (JAMA) recently retracted several papers by a leading researcher on food and consumption. What does this mean for the researcher's findings?
Biobanks can help scientists retain quality samples for future experiments.
Most biobanks, whether small or large, have high quality assurance and control measures in place.
Academic journals rely on peer review to support editors in making decisions about what to publish.
There's peer review – and then there's peer review. With more knowledge you can dive in a little deeper and make a call about how reliable a science paper really is.
Playing violent video games doesn’t make kids more aggressive.
AP Photo/Paul Sakuma
For years, there have been questions about research showing connections between playing violent video games and aggressive behavior.
It may take time for a tiny step forward to show its worth.
Scientists are rewarded with funding and publications when they come up with innovative findings. But in the midst of a 'reproducibility crisis,' being new isn't the only thing to value about research.
Science itself needs to be put under the microscope and carefully scrutinised to deal with its flaws.
We are observing two new phenomena. On one hand, doubt is being cast on the quality of entire scientific fields or sub-fields. On the other, this doubt is playing out in the open, in the media and the blogosphere.
Step one is not being afraid to reexamine a site that’s been previously excavated.
Dominic O'Brien. Gundjeihmi Aboriginal Corporation
A team of archaeologists strove to improve the reproducibility of their results, a goal that shaped their choices in the field, in the lab and during data analysis.
Opening up data and materials helps with research transparency.
REDPIXEL.PL via Shutterstock.com
Partly in response to the so-called 'reproducibility crisis' in science, researchers are embracing a set of practices that aim to make the whole endeavor more transparent, more reliable – and better.
When new discoveries are jealously guarded under lock and key, science suffers.
A century-old case of scientific fraud illustrates how hard it is to untangle the truth when access to new discoveries is limited.
Online tools are changing the way psychology research is conducted.
Tools like Amazon's Mechanical Turk allow psychology researchers to recruit test subjects from around the world. But the system can also be exploited.
Science and integrity are under the microscope.
We asked three experts for their takes.
Experiment design affects the quality of the results.
IAEA Seibersdorf Historical Images
Embracing more rigorous scientific methods would mean getting science right more often than we currently do. But the way we value and reward scientists makes this a challenge.
Good science loses out when bad science gets the funding.
New studies on the quality of published research show we could be wasting billions of dollars a year on bad science, to the neglect of good science projects.
In scientific research, repetition is good.
Scientists build on knowledge gained and published by others. How can we know which findings to trust?
Weighing the evidence.
Meta-analyses that combine many different studies are the gold standard for medical evidence. But they are only as good as the research they examine.
Computer… or black box for data?
Virtually every researcher relies on computers to collect or analyze data. But when computers are opaque black boxes that manipulate data, it's impossible to replicate studies – a core value for science.
Run a study again and again – should the results hit the same bull’s-eye every time?
The field of psychology is trying to absorb a recent big study that was able to replicate only 36 out of 100 major research papers. That finding is an issue, but maybe not for the reason you think.
What does it mean if the majority of what’s published in journals can’t be reproduced?
Researchers from around the globe tried to replicate 100 published psychology studies. They were successful on only 36.