Twenty years ago, it was difficult to find information about local restaurants, except from the restaurants themselves. Now, thanks to the Internet, independent evaluations are easy to find. It’s past time we made that the case for scientific research, too.
At its frontier, science is unpolished and uneven. New findings often come from machines or procedures that are not yet fully understood – as Einstein is claimed to have said: “If we knew what we were doing, it wouldn’t be called research”.
Humility is thus important, but you won’t always find it in the popular accounts that trumpet new work. To some extent, that reflects how the researchers themselves portray their work. We scientists sometimes prefer to brush aside any possibility that our findings reflect an error. We also tend to be over-optimistic that our findings are of wide generality, rather than being contingent on very specific circumstances.
In scientists, as in people generally, biases are inevitable. To avoid excessive certainty in scientific conclusions, then, we need to see the opinions on both sides of an issue. Unfortunately, many disagreements among researchers are systematically concealed rather than revealed.
New results undergo a process called “peer review”. Peer review often reveals the complexity, uncertainty, and disagreement inherent in cutting-edge work, but its contents are not made available to the public.
As a result, science as the public sees it appears far more certain, universal, and uncontroversial than it actually is.
The peer review process
Humans are biased creatures. Secretly or not so secretly, researchers will cheer for their favorite theories. Individual scientific judgments, therefore, should not be trusted blindly.
Fortunately, we scientists are frequently in dialogue with each other. When presenting my work, I often find myself prodded to take a harder look at my own ideas, and at the rigor of my methods. I am dragged to the realization that my critics are actually right about some things.
Much of this back-and-forth happens in journal peer review. After I submit an article to a journal, an editor sends it to two or three other researchers. These “peer reviewers”, who frequently are drawn from the world’s foremost experts on the article’s topic, are tasked with evaluating the article and the studies it reports.
The experts’ comments are a varied mix of criticism and praise: thoughts on the data analyses, the procedures used, how the findings compare with those of previous work, and the strength of the evidence presented for the article’s conclusions.
Even when peer reviews are brief, they record the experts’ endorsement of particular aspects of an article, information that would be highly valuable to some readers.
Sometimes the reviews are quite lengthy, and the knowledge within them cannot be found anywhere else. For example, the last three peer reviews that I wrote were each over a thousand words, amounting to sixteen pages of text in total. Many of the individual comments are of little interest to anyone not working in the field, but together they can add up to broader implications for the credibility of the conclusions.
Authors typically respond to peer reviews by incorporating some of their points, fixing overt errors, and shifting parts of their argument to reduce any reliance on dubious assumptions. However, researchers sometimes avoid directly addressing contentious issues, preferring to sweep them under the rug. The peer reviewers’ concerns can then be undetectable to future readers.
Avoiding narrative detours to discuss blemishes or questionable assumptions can be important for getting across the main point of a study. This is the reason I craft my own articles as a tidy story – or so I tell myself. I want my articles to shine. I must admit, though, that I also want readers to overlook any jumps on the path that leads to my conclusion.
A sanitized version of science
Scientific journals consider peer reviews confidential, and allow only the article’s authors, the two or three reviewers, and the journal editor to see them. What readers and the world see, then, is a sanitized version of science. The consumers of research – be they other researchers, engineers, policy-makers, journalists, or pharmaceutical firms – are deprived of the information in the peer reviews.
When writing an article on a new finding, journalists end up having to arrange for external evaluation themselves. They ring up any experts they can find and ask them about possible problems. In so doing, they are attempting to re-create a review process that has already been done. Rarely will they get comments as extensive as those in the formal peer review, and they get these comments for only a tiny fraction of the new findings that are published each day.
If the evaluations from the original peer review process were public, news accounts would be less credulous, and public understanding of science might be more sophisticated. Researchers, able to read peer review comments alongside new work, would be less likely to assume that a finding is solid, an assumption that contributed to the replication crisis.
Post-pandemic, a new research world
As the coronavirus outbreak spread last year, there was a broad realization that peer review of Covid research was too important to be kept behind closed doors. Researchers rapidly assessed new work and posted their comments on Internet platforms such as Twitter and the research-commenting site PubPeer.
Twitter discussions can be chaotic, and the Twitter algorithm does not reward nuance. The resulting excesses are one reason not all researchers have applauded the rise of open commenting, and with the passing of the pandemic there is a danger that science will shrink back into its shell. Fortunately, public peer review initiatives designed by researchers, rather than by social media companies looking to monetize outrage, have now attracted substantial numbers of experts.
Over the last twenty years, thanks to something called the “open access” movement, we’ve seen the proportion of scientific articles that can be read for free go from a tiny minority to nearly half. Over the next twenty years, to achieve real public understanding of the nature of new findings, we must also open up peer review.