Retractions – The Conversation

Peer review isn’t perfect – I know because I teach others how to do it and I’ve seen firsthand how it comes up short (2024-02-06)

<figure><img src="https://images.theconversation.com/files/573563/original/file-20240205-21-woj56j.jpg?ixlib=rb-1.1.0&rect=19%2C58%2C6489%2C4251&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Quality in academic research can be compromised when diversity of experience is lacking among the reviewers.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/middle-aged-men-in-a-library-royalty-free-image/1369631769?phrase=professor+at+computer&adppopup=true">Ika84 via Getty Images</a></span></figcaption></figure><p>When I teach research methods, a major focus is <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975196/">peer review</a>. As a process, peer review evaluates academic papers for their quality, integrity and impact on a field, largely shaping what scientists accept as “knowledge.” By instinct, any academic follows up a new idea with the question, “Was that peer reviewed?”</p>
<p>Although I believe in the importance of peer review – and I conduct peer reviews for several academic journals – I know how vulnerable the process can be. Not only have academics questioned <a href="https://doi.org/10.1080/00091383.1982.10569910">peer review reliability</a> for decades, but the retraction of more than <a href="https://www.nature.com/articles/d41586-023-03974-8">10,000 research papers in 2023</a> set a new record.</p>
<p>I had my first encounter with the flaws in the peer review process in 2015, during my first year as a Ph.D. student in educational psychology at a large land-grant university in the Pacific Northwest.</p>
<p>My adviser published some of the most widely cited studies in educational research. He served on several editorial boards. Some of the most recognized journals in learning science solicited his review of new studies. One day, I knocked on his office door. He answered without getting up from his chair, a printed manuscript splayed open on his lap, and waved me in.</p>
<p>“Good timing,” he said. “Do you have peer review experience?”</p>
<p>I had served on the editorial staff for literary journals and reviewed poetry and fiction submissions, but I doubted much of that transferred to scientific peer review.</p>
<p>“Fantastic.” He smiled in relief. “This will be real-world learning.” He handed me the manuscript from his lap and told me to have my written review back to him in a week.</p>
<p>I was too embarrassed to ask how one actually does peer review, so I offered an impromptu plan based on my prior experience: “I’ll make editing comments in the margins and then write a summary about the overall quality?”</p>
<p>His smile faded, either because of disappointment or distraction. He began responding to an email.</p>
<p>“Make sure the methods are sound. The results make sense. Don’t worry about the editing.”</p>
<p>Ultimately, I fumbled my way through, sparing my adviser one review he would otherwise have had to conduct. Afterward, I did receive good feedback and eventually became a confident peer reviewer. But at the time, I certainly was not a “peer.” I was too new in my field to evaluate methods and results, and I had not yet been exposed to enough studies to identify a surprising observation or to recognize the quality I was supposed to control. Manipulated data or subpar methods could easily have gone undetected.</p>
<h2>Effects of bias</h2>
<p>Knowledge is not self-evident. A survey can be designed with a <a href="https://www.qualtrics.com/experience-management/research/survey-bias/">problematic amount of bias</a>, even if unintentional.</p>
<p>Observing a phenomenon in one context, such as an intervention helping white middle-class children learn to read, <a href="https://journals.sagepub.com/doi/full/10.1177/1086296X19877463">may not necessarily yield insights</a> for how to best teach reading to children in other demographics. Debates over “the science of reading” in general have lasted decades, with researchers arguing over <a href="https://www.vox.com/23815311/science-of-reading-movement-literacy-learning-loss">constantly changing “recommendations</a>,” such as whether to teach phonics or the use of context cues.</p>
<p>A correlation – say, between a student bullying other students and playing violent video games – <a href="https://www.nature.com/articles/nmeth.3587">may not be causation</a>. We do not know whether the student became a bully because of playing violent video games. Only experts within a field would be able to notice such distinctions, and even then, experts do not always agree on what they notice.</p>
<p>As individuals, we can very often be limited by our own experiences. Let’s say in my life I only see white swans. I might form the knowledge that only white swans exist. Maybe I write a manuscript about my lifetime of observations, concluding that all swans are white. I submit that manuscript to a journal, and a “peer,” someone who also has observed a lot of swans, says, “Wait a minute, I’ve seen black swans.” That peer would communicate back to me their observations so that I can refine my knowledge.</p>
<p>The peer plays a pivotal role in evaluating observations, with the overall goal of advancing knowledge. For example, if the above scenario were reversed, and peer reviewers who all believed that all swans were white came across the first study observing a black swan, the study would receive a lot of attention as researchers scrambled to replicate that observation. So why was a first-year graduate student getting to stand in for an expert? Why would my review count the same as a veteran’s review? One answer: The process relies <a href="https://www.insidehighered.com/news/2022/06/13/peer-review-crisis-creates-problems-journals-and-scholars">almost entirely on unpaid labor</a>.</p>
<p>Despite the fact that peers are professionals, peer review is not a profession. </p>
<p>As a result, the same overworked scholars often receive the bulk of the peer review requests. Besides the labor inequity, a small pool of experts can lead to a narrowed process of what is publishable or what counts as knowledge, directly threatening <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2023.1120938/full">diversity of perspectives and scholars</a>.</p>
<p>Without a large enough reviewer pool, the process can easily fall victim to politics: in a small community, reviewers recognize one another’s work, creating conflicts of interest. Many of the issues with peer review can be addressed by professionalizing the field, either through official recognition or compensation.</p>
<h2>Value despite challenges</h2>
<p>Despite these challenges, I still tell my students that peer review offers the best method for evaluating studies and advancing knowledge. Consider the statistical phenomenon suggesting that groups of people are more likely to arrive at “right answers” than individuals.</p>
<p>In his book “<a href="https://www.penguinrandomhouse.com/books/175380/the-wisdom-of-crowds-by-james-surowiecki/">The Wisdom of Crowds</a>,” author James Surowiecki tells the story of a county fair in 1906, where fairgoers guessed the weight of an ox. Sir Francis Galton averaged the 787 guesses and arrived at 1,197 pounds. The ox weighed 1,198 pounds.</p>
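The averaging effect behind Surowiecki’s anecdote is easy to simulate. In this sketch, the true weight (1,198 pounds) and the crowd size (787 guesses) come from the story above, while the size of each individual’s error is a hypothetical stand-in, not Galton’s data:

```python
import random
import statistics

random.seed(6)

TRUE_WEIGHT = 1198   # pounds: the ox's actual weight in Surowiecki's account
CROWD_SIZE = 787     # number of guesses Galton averaged

# Hypothetical crowd: each fairgoer guesses with substantial individual error
# (a 75-pound standard deviation is an assumed figure for illustration).
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(CROWD_SIZE)]

crowd_estimate = statistics.mean(guesses)
crowd_error = abs(crowd_estimate - TRUE_WEIGHT)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"crowd mean: {crowd_estimate:.0f} lb")
print(f"crowd error: {crowd_error:.1f} lb")
print(f"typical individual error: {typical_individual_error:.1f} lb")
```

Run it with different seeds and the pattern holds: individual guesses miss by dozens of pounds on average, while their mean lands within a few pounds of the true weight, because the individual errors largely cancel out.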
<p>When it comes to science and the reproduction of ideas, the wisdom of the many can account for individual outliers. Fortunately, and ironically, this is how science discredited Galton’s take on eugenics, which has <a href="https://science.howstuffworks.com/innovation/big-thinkers/francis-galton.htm">overshadowed his contributions to science</a>. </p>
<p>As a process, peer review theoretically works. The question is whether the peer will get the support needed to effectively conduct the review.</p>
<p class="fine-print"><em><span>JT Torres does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Politics and the lack of compensation are among the factors that can undermine the peer review process, which is important to the quality of knowledge in academic journals.</em></p>
<p class="fine-print">JT Torres, Director of the Center for Teaching and Learning, Quinnipiac University. Licensed as Creative Commons – attribution, no derivatives.</p>

When authoritative sources hold onto bad data: A legal scholar explains the need for government databases to retract information (2023-12-14)

<figure><img src="https://images.theconversation.com/files/564733/original/file-20231211-15-sxj8oy.jpg?ixlib=rb-1.1.0&rect=0%2C5%2C3888%2C2578&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Government information sources like the U.S. patent database often file bad information without labeling it or providing a way to retract it.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/junk-drawer-label-royalty-free-image/485963757">Thinglass/iStock via Getty Images</a></span></figcaption></figure><p>In 2004, Hwang Woo-suk was celebrated for his breakthrough discovery creating <a href="https://doi.org/10.1126/science.1094515">cloned human embryos</a>, and his work was published in the prestigious journal Science. But the discovery <a href="https://www.nytimes.com/2009/10/27/world/asia/27clone.html?unlocked_article_code=1.D00.aGQ6.J19oSJ1JE6oX&smid=url-share">was too good to be true</a>; Dr. Hwang had fabricated the data. Science publicly retracted the article and assembled a team to <a href="https://doi.org/10.1126/science.1137840">investigate what went wrong</a>.</p>
<p>Retractions are frequently in the news. The high-profile discovery of a room-temperature superconductor <a href="https://www.wsj.com/science/superconductor-paper-retracted-journal-nature-ranga-dias-c437ce6e">was retracted</a> on Nov. 7, 2023. A series of retractions <a href="https://www.washingtonpost.com/education/2023/07/19/stanford-university-marc-tessier-lavigne-research-controversy/">toppled the president</a> of Stanford University on July 19, 2023. Major early studies on COVID-19 were found to have <a href="https://doi.org/10.1126/science.abd1697">serious data problems</a> and retracted on June 4, 2020. </p>
<p>Retractions are generally framed as a negative: as science not working properly, as an embarrassment for the institutions involved, or as a flaw in the peer review process. They can be all those things. But they can also be part of a story of science working the right way: finding and correcting errors, and publicly acknowledging when information turns out to be incorrect.</p>
<p>A far more pernicious problem occurs when information is not, and cannot be, retracted. There are many apparently authoritative sources that contain flawed information. Sometimes the flawed information is deliberate, but sometimes it isn’t – after all, to err is human. Often, there is no correction or retraction mechanism, meaning that information known to be wrong remains on the books without any indication of its flaws.</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=SlW0VEkAAAAJ&view_op=list_works&sortby=pubdate">patent and intellectual property legal scholar</a>, I’ve found that this is a particularly harmful problem with government information, which is often considered a <a href="https://dx.doi.org/10.2139/ssrn.4372254">source of trustworthy data but is prone to error</a> and often lacking any means to retract the information.</p>
<h2>Patent fictions and fraud</h2>
<p>Consider patents, documents that contain many technical details that can be <a href="https://doi.org/10.1038/nbt.3864">useful to scientists</a>. There is <a href="https://doi.org/10.1162/rest_a_01353">no way to retract a patent</a>. And patents contain <a href="https://dx.doi.org/10.2139/ssrn.3538746">frequent errors</a>: Although patents are reviewed by an expert examiner before being granted, <a href="https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5848&context=flr">examiners do not check</a> whether the scientific data in the patent is correct.</p>
<p>In fact, the U.S. Patent and Trademark Office permits patentees to include <a href="https://doi.org/10.1126/science.aax0748">fictional experiments and data</a> in patents. This practice, called <a href="https://doi.org/10.1126/science.aax0748">prophetic examples</a>, is common; about <a href="https://dx.doi.org/10.2139/ssrn.3202493">25% of life sciences patents contain fictional experiments</a>. The patent office requires that prophetic examples be written in the present or future tense while real experiments can be written in the past tense. But this is confusing to nonspecialists, including scientists, who tend to assume that a phrase like “X and Y are mixed at 300 degrees to achieve a 95% yield rate” indicates a real experiment. </p>
<p>Almost a decade after Science retracted the journal article claiming cloned human cells, <a href="https://www.nytimes.com/2014/02/15/science/disgraced-scientist-granted-us-patent-for-work-found-to-be-fraudulent.html?searchResultPosition=1">Dr. Hwang received a U.S. patent</a> on his retracted discovery. Unlike the journal article, this patent has not been retracted. The patent office did not investigate the accuracy of the data – indeed, it granted the patent long after the data’s inaccuracy had been publicly acknowledged – and there is no indication on the face of the patent that it contains information that has been retracted elsewhere.</p>
<p>This is no anomaly. In a similar example, Elizabeth Holmes, the former – now imprisoned – CEO of Theranos, <a href="https://doi.org/10.1162/rest_a_01353">holds patents</a> on her thoroughly discredited claims for a small device that could rapidly run many tests on a small blood sample. Some of those patents were granted long after Theranos’ fraud headlined major newspapers.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A document containing numbers and text" src="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=773&fit=crop&dpr=1 600w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=773&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=773&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=972&fit=crop&dpr=1 754w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=972&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=972&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The U.S. Patent and Trademark Office granted a patent to Theranos on Dec. 18, 2018, three months after the company was dissolved following a series of investigations and lawsuits that detailed its fraud. The patent has not been rescinded and contains no notice of the faulty nature of the information it contains.</span>
<span class="attribution"><a class="source" href="https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/10156579">U.S. Patent and Trademark Office</a></span>
</figcaption>
</figure>
<h2>Long-lived bad information</h2>
<p>This sort of under-the-radar wrong data can be deeply misleading to readers. The system of retractions in scientific journals is not without its critics, but it compares favorably to the alternative of no retractions. Without retractions, readers don’t know when they are looking at incorrect information. </p>
<p>My colleague <a href="https://scholar.google.com/citations?hl=en&user=jYI7hFEAAAAJ&view_op=list_works&sortby=pubdate">Soomi Kim</a> and I conducted a study of patent-paper pairs. We looked at cases where the same information was published in a journal article and in a patent by the same scientists, and the journal paper had subsequently been retracted. We found that while citations to papers dropped steeply after the paper was retracted, there was <a href="https://doi.org/10.1162/rest_a_01353">no reduction in citations to patents</a> with the very same incorrect information. </p>
<p>This probably happened because scientific journals paint a big red “retracted” notice on retracted articles online, informing the reader that the information is wrong. By contrast, patents have no retraction mechanism, so incorrect information continues to spread.</p>
<p>There are many other instances where <a href="https://dx.doi.org/10.2139/ssrn.4372254">authoritative-looking information is known to be wrong</a>. The Environmental Protection Agency publishes emissions data supplied by companies but not reviewed by the agency. Similarly, the Food and Drug Administration disseminates official-looking information about drugs that is generated by drug manufacturers and posted without an evaluation by the FDA.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3jYqgTKVQGE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Retractions play an important role in science.</span></figcaption>
</figure>
<h2>Consequences of nonretractions</h2>
<p>There are also economic consequences when incorrect information can’t be easily corrected. The Food and Drug Administration publishes <a href="https://www.fda.gov/drugs/drug-approvals-and-databases/approved-drug-products-therapeutic-equivalence-evaluations-orange-book">a list of patents</a> that cover brand-name drugs. The FDA won’t approve a generic drug unless the generic manufacturer has shown that each patent that covers the drug in question is expired, not infringed or invalid.</p>
<p>The problem is that the list of patents is <a href="https://dx.doi.org/10.2139/ssrn.4372254">generated by the brand-name drug manufacturers</a>, who have an incentive to list patents that <a href="https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-challenges-more-100-patents-improperly-listed-fdas-orange-book">don’t actually cover their drugs</a>. Doing so increases the burden on generic drug manufacturers. The list is not checked by the FDA or anyone else, and there are few mechanisms for anyone other than the brand-name manufacturer to tell the FDA to remove a patent from the list. </p>
<p>Even when retractions are possible, they are effective only when readers pay attention to them. Financial data is sometimes retracted and corrected, but the revisions are not timely. “<a href="https://www.wsj.com/finance/investing/economic-data-lead-markets-and-governments-astray-abd79102">Markets don’t tend to react to revisions</a>,” Paul Donovan, chief economist of UBS Global Wealth Management, told the Wall Street Journal, referring to governments revising gross domestic product figures.</p>
<p>Misinformation is a growing problem, and there are no easy solutions. But there are steps that would almost certainly help. One relatively straightforward one is for trusted data sources like those from the government to follow the lead of scientific journals and create a mechanism to retract erroneous information.</p>
<p class="fine-print"><em><span>Janet Freilich does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Theranos was dissolved years ago, and its CEO, Elizabeth Holmes, is in prison, but the company’s patents based on bad science live on – a stark example of the persistence of faulty information.</em></p>
<p class="fine-print">Janet Freilich, Associate Professor of Law, Fordham University. Licensed as Creative Commons – attribution, no derivatives.</p>

Intellectual humility is a key ingredient for scientific progress (2023-12-06)

<figure><img src="https://images.theconversation.com/files/563651/original/file-20231205-15-8od38k.jpg?ixlib=rb-1.1.0&rect=108%2C9%2C6484%2C4260&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Would technologies like the airplane ever get off the ground without people balancing commitment to their vision with openness to new ideas?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/on-december-17-at-10-30am-at-kitty-hawk-north-carolina-this-news-photo/1371400707">HUM Images/Universal Images Group via Getty Images</a></span></figcaption></figure><p>The virtue of intellectual humility is getting a lot of attention. It’s heralded as a part of <a href="https://wisdomcenter.uchicago.edu/news/wisdom-news/what-does-intellectual-humility-look">wisdom</a>, an aid to <a href="https://greatergood.berkeley.edu/article/item/five_reasons_why_intellectual_humility_is_good_for_you">self-improvement</a> and a catalyst for <a href="https://doi.org/10.1177/0146167221997619">more productive political dialogue</a>.
While researchers define intellectual humility in various ways, the core of the idea is “<a href="https://www.templeton.org/wp-content/uploads/2020/08/JTF_Intellectual_Humility_final.pdf">recognizing that one’s beliefs and opinions might be incorrect</a>.”</p>
<p>But achieving intellectual humility is hard. <a href="https://doi.org/10.1257/aer.20190668">Overconfidence is a persistent problem</a> for many people, and it does <a href="https://doi.org/10.1016/j.obhdp.2008.02.007">not appear to be improved</a> by education or expertise. Even scientific pioneers can sometimes lack this valuable trait.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="black and white photo of man with white beard" src="https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=794&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=794&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=794&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=998&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=998&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563649/original/file-20231205-29-legp7a.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=998&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">William Thomson, known as Lord Kelvin, poses in 1902 with his compass.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/william-thomson-lord-kelvin-scottish-mathematician-and-news-photo/113443136">Universal History Archive/Getty Images</a></span>
</figcaption>
</figure>
<p>Take the example of one of the greatest scientists of the 19th century, <a href="https://www.britannica.com/biography/William-Thomson-Baron-Kelvin">Lord Kelvin</a>, who was not immune to overconfidence. <a href="https://archive.org/details/newark-advocate-1902-04-26/page/n3/mode/2up">In a 1902 interview</a> “on scientific matters now prominently before the public mind,” he was asked about the future of air travel: “(W)e have no hope of solving the problem of aerial navigation in any way?”</p>
<p>Lord Kelvin replied firmly: “No; I do not think there is any hope. Neither the balloon, nor the aeroplane, nor the gliding machine will be a practical success.” The <a href="https://www.britannica.com/biography/Wright-brothers">Wright brothers’ first successful flight</a> was a little over a year later.</p>
<p>Scientific overconfidence is not confined to matters of technology. A few years earlier, Kelvin’s eminent colleague, <a href="https://www.nobelprize.org/prizes/physics/1907/michelson/biographical/">A. A. Michelson</a>, the first American to win a Nobel Prize in science, <a href="https://campub.lib.uchicago.edu/view/?docId=mvol-0005-0003-0002#page/15/mode/1up/">expressed a similarly striking view</a> about the fundamental laws of physics: “It seems probable that most of the grand underlying principles have now been firmly established.”</p>
<p>Over the next few decades – in no small part due to Michelson’s own work – fundamental physical theory underwent its most dramatic changes since the times of Newton, with the development of the theory of relativity and quantum mechanics “<a href="https://philpapers.org/rec/HEIPAP">radically and irreversibly</a>” altering our view of the physical universe.</p>
<p>But is this sort of overconfidence a problem? Maybe it actually helps the progress of science? I suggest that intellectual humility is a better, more progressive stance for science.</p>
<h2>Thinking about what science knows</h2>
<p>As a <a href="https://scholar.google.com/citations?user=aoHQHaAAAAAJ&hl=en">researcher</a> in philosophy of science for over 25 years and one-time editor of the main journal in the field, <a href="https://www.philsci.org/journal.php">Philosophy of Science</a>, I’ve had numerous studies and reflections on the nature of scientific knowledge cross my desk. The biggest questions are not settled.</p>
<p>How confident ought people be about the conclusions reached by science? How confident ought scientists be in their own theories?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="colored etched plate illustrating Earth with planets orbiting around it" src="https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=509&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=509&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=509&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=640&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=640&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563654/original/file-20231205-15-vv17og.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=640&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Eventually astronomy moved past the geocentric model of the universe with Earth at its center, which had stood for centuries.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/plate-from-the-cosmographical-atlas-harmonia-macrocosmica-news-photo/544173270">VCG Wilson/Corbis via Getty Images</a></span>
</figcaption>
</figure>
<p>One ever-present consideration goes by the name “the <a href="https://plato.stanford.edu/entries/scientific-realism/#PessIndu">pessimistic induction</a>,” advanced most prominently in modern times by the philosopher <a href="https://en.wikipedia.org/wiki/Larry_Laudan">Larry Laudan</a>. Laudan pointed out that the history of science is littered with discarded theories and ideas.</p>
<p>It would be near-delusional to think that now, finally, we have found the science that will not be discarded. It is far more reasonable to conclude that today’s science will also, in large part, be rejected, or significantly modified, by future scientists.</p>
<p>But the pessimistic induction is not the end of the story. An equally powerful consideration, advanced prominently in modern times by the philosopher <a href="https://en.wikipedia.org/wiki/Hilary_Putnam">Hilary Putnam</a>, goes by the name “the no-miracles argument.” It would be a miracle, so the argument goes, if successful scientific predictions and explanations were just accidental, or lucky – that is, if the success of science did not arise from its getting something right about the nature of reality.</p>
<p>There must be something right about the theories that have, after all, made air travel – not to mention space travel, genetic engineering and so on – a reality. It would be near-delusional to conclude that present-day theories are just wrong. It is far more reasonable to conclude that there is something right about them.</p>
<h2>A pragmatic argument for overconfidence?</h2>
<p>Setting aside the philosophical theorizing, what is best for scientific progress?</p>
<p>Of course, scientists can be mistaken about the accuracy of their own positions. Even so, there is reason to believe that over the long arc of history – or, in the cases of Kelvin and Michelson, in relatively short order – such mistakes will be unveiled.</p>
<p>In the meantime, perhaps extreme confidence is important for doing good science. Maybe science needs people who tenaciously pursue new ideas with the kind of (over)confidence that can also lead to quaint declarations of the impossibility of air travel or the finality of physics. Yes, it can lead to dead ends, <a href="https://www.science.org/content/article/another-retraction-looms-embattled-physicist-behind-blockbuster-superconductivity">retractions</a> and the like, but maybe that’s just the price of scientific progress.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="black and white photo portrait of man in tailcoat" src="https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=837&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=837&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=837&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1052&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1052&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563656/original/file-20231205-19-x25n13.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1052&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ignaz Semmelweis used antiseptic measures to slash death rates in his hospital.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/ignaz-philip-semmelweis-hungarian-obstetrician-discovered-news-photo/113444168">Universal History Archive via Getty Images</a></span>
</figcaption>
</figure>
<p>In the 19th century, in the face of continued and strong opposition, the Hungarian doctor <a href="https://theconversation.com/ignaz-semmelweis-the-doctor-who-discovered-the-disease-fighting-power-of-hand-washing-in-1847-135528">Ignaz Semmelweis</a> consistently and repeatedly advocated for the importance of sanitation in hospitals. The medical community rejected his idea so severely that he wound up forgotten in a mental asylum. But he was, it seems, right, and <a href="https://doi.org/10.1056/NEJMp048025">eventually the medical community came around</a> to his view.</p>
<p>Maybe we need people who can be committed so fully to the truth of their ideas in order for advances to be made. Maybe scientists should be overconfident. Maybe they should shun intellectual humility.</p>
<p>One might hope, as <a href="https://doi.org/10.4324/9780203979648">some have argued</a>, that the <a href="https://www.britannica.com/science/scientific-method">scientific process</a> – the <a href="https://theconversation.com/retractions-and-controversies-over-coronavirus-research-show-that-the-process-of-science-is-working-as-it-should-140326">review and testing</a> of theories and ideas – will eventually weed out the crackpot ideas and false theories. The cream will rise.</p>
<p>But sometimes it takes a long time, and it isn’t clear that scientific examinations, as opposed to social forces, are always the cause of the downfall of bad ideas. The 19th century (pseudo)science of <a href="https://www.britannica.com/topic/phrenology">phrenology</a> was overturned “as much for its fixation on social categories as for an inability within the scientific community to replicate its findings,” as noted by a <a href="https://doi.org/10.1016/j.cortex.2018.04.011">group of scientists</a> who put a kind of final nail in the coffin of phrenology in 2018, nearly 200 years after its heyday of correlating skull features with mental ability and character.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="masked man in scrubs washing at sink" src="https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/563647/original/file-20231205-15-x25n13.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Today’s health care workers follow careful sanitary protocols – long after Semmelweis first advocated them.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/surgeon-washing-hands-before-egg-collection-news-photo/129377308">Universal Images Group via Getty Images</a></span>
</figcaption>
</figure>
<h2>Intellectual humility as a middle ground</h2>
<p>The marketplace of ideas did produce the right results in the cases mentioned. Kelvin and Michelson were corrected fairly quickly. It took much longer for phrenology and hospital sanitation – and the consequences of this delay were undeniably disastrous in both cases.</p>
<p>Is there a way to encourage vigorous, committed and stubborn pursuit of new, possibly unpopular scientific ideas, while acknowledging the great value and power of the scientific enterprise as it now stands?</p>
<p>Here is where intellectual humility can play a positive role in science. Intellectual humility is not skepticism. It does not imply doubt. An intellectually humble person may have strong commitments to various beliefs – scientific, moral, religious, political or other – and may pursue those commitments with vigor. Their intellectual humility lies in their openness to the possibility, indeed strong likelihood, that nobody is in possession of the full truth, and that others, too, may have insights, ideas and evidence that should be taken into account when forming their own best judgments.</p>
<p>Intellectually humble people will therefore welcome challenges to their ideas, research programs that run contrary to current orthodoxy, and even the pursuit of what might seem to be crackpot theories. Remember, doctors in his time were convinced that Semmelweis was a crackpot.</p>
<p>This openness to inquiry does not, of course, imply that scientists are obligated to accept theories they take to be wrong. What we ought to accept is that we too might be wrong, that something good might come of the pursuit of those other ideas and theories, and that tolerating rather than persecuting those who pursue such things just might be the best way forward for science and for society.</p>
<p class="fine-print"><em><span>Michael Dickson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment. This article was produced with support from UC Berkeley's Greater Good Science Center and the John Templeton Foundation as part of the GGSC's initiative on Expanding Awareness of the Science of Intellectual Humility.</span></em></p>An intellectually humble person may have strong commitments to various beliefs − but balanced with an openness to the likelihood that others, too, may have valuable insights, ideas and evidence.Michael Dickson, Professor of Philosophy, University of South CarolinaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1403262020-07-06T12:12:40Z2020-07-06T12:12:40ZRetractions and controversies over coronavirus research show that the process of science is working as it should<figure><img src="https://images.theconversation.com/files/345365/original/file-20200702-111359-13unxc9.jpg?ixlib=rb-1.1.0&rect=0%2C26%2C3576%2C2344&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A high-profile paper
on the risks of hydroxychloroquine was recently and rightfully retracted.</span> <span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Virus-Outbreak-Malaria-Drug-States/195f1b7599b54c8abc2d3cf2da592672/67/0">AP Photo/John Locher</a></span></figcaption></figure><p><em>Leer <a href="https://theconversation.com/controversias-en-la-investigacion-del-coronavirus-muestran-que-la-ciencia-esta-funcionando-como-deberia-143391">en español</a></em></p>
<p>Several high-profile papers on COVID-19 research have come under fire from people in the scientific community in recent weeks. Two articles addressing the safety of certain drugs when taken by COVID-19 patients were <a href="https://www.nytimes.com/2020/06/04/health/coronavirus-hydroxychloroquine.html">retracted</a>, and researchers are calling for the retraction of a third paper that evaluated behaviors that <a href="https://metrics.stanford.edu/PNAS%20retraction%20request%20LoE%20061820">mitigate coronavirus transmission</a>.</p>
<p>Some people are viewing the retractions as an <a href="https://www.wsj.com/articles/the-lancets-politicized-science-on-antimalarial-drugs-11591053222?cx_testId=3&cx_testVariant=cx_4&cx_artPos=0#cxrecs_s">indictment of the scientific process</a>. Certainly, the overturning of these papers is bad news, and there is plenty of blame to go around. </p>
<p>But despite these short-term setbacks, the scrutiny and subsequent correction of the papers actually show that science is working. Reporting of the pandemic is allowing people to see, many for the first time, the messy business of scientific progress. </p>
<h2>Scientific community quickly responds to flawed research</h2>
<p>In May, two papers were published on the safety of certain drugs for COVID-19 patients. The first, published in the New England Journal of Medicine, claimed that a particular heart medication <a href="http://doi.org/10.1056/NEJMoa2007621">was in fact safe for COVID-19 patients</a>, despite previous concerns. The second, published in The Lancet, claimed that the antimalarial drug <a href="https://doi.org/10.1016/S0140-6736(20)31180-6">hydroxychloroquine increased the risk of death</a> when used to treat COVID-19. </p>
<p>The Lancet paper caused the World Health Organization to briefly <a href="https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---25-may-2020">halt studies investigating hydroxychloroquine</a> for COVID-19 treatment.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=806&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=806&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=806&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1012&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1012&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345113/original/file-20200701-159824-g2k6ku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1012&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The paper published in The Lancet claimed that hydroxychloroquine increased risk of death in COVID-19 patients, but was retracted when other scientists discovered the data used for the study was unreliable.</span>
<span class="attribution"><a class="source" href="https://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(20)31180-6.pdf">The Lancet/Mandeep R Mehra, Sapan S Desai, Frank Ruschitzka, Amit N Patel</a></span>
</figcaption>
</figure>
<p>Within days, over 200 scientists signed an <a href="https://doi.org/10.5281/zenodo.3871094">open letter</a> highly critical of the paper, noting that some of the findings were simply implausible. The database provided by the tiny company Surgisphere – whose website is no longer accessible – was unavailable during peer review of the paper or to scientists and the public afterwards, preventing anyone from evaluating the data. Finally, the letter suggested that it was unlikely this company was able to obtain the hospital records alleged to be in the database when no one else had access to this information.</p>
<p>By early June, both <a href="https://doi.org/10.1016/S0140-6736(20)31324-6">the Lancet</a> and <a href="https://doi.org/10.1056/NEJMc2021225">New England Journal of Medicine</a> articles were retracted, citing concerns about the integrity of the database the researchers used in the studies. A retraction is the withdrawal of a published paper because the data underlying the major conclusions of the work are found to be seriously flawed. These flaws are sometimes, but not always, due to intentional scientific misconduct. </p>
<p>The urgency to find solutions to the COVID-19 pandemic certainly contributed to the publication of <a href="https://theconversation.com/coronavirus-research-done-too-fast-is-testing-publishing-safeguards-bad-science-is-getting-through-134653">sloppy and possibly fraudulent science</a>. The quality control measures that minimize the publication of bad science failed miserably in these cases.</p>
<h2>Imperfect and iterative</h2>
<p>The retraction of the hydroxychloroquine paper in particular drew immediate attention not only because it placed science in a bad light, but also because <a href="https://www.forbes.com/sites/andrewsolender/2020/05/22/all-the-times-trump-promoted-hydroxychloroquine/#20668b474643">President Trump had touted the drug</a> as an effective treatment for COVID-19 despite the lack of strong evidence.</p>
<p>Responses in the media were harsh. The New York Times declared that “<a href="https://www.nytimes.com/2020/06/14/health/virus-journals.html?searchResultPosition=1">The pandemic claims new victims: prestigious medical journals</a>.” The Wall Street Journal accused the Lancet of “<a href="https://www.wsj.com/articles/the-lancets-politicized-science-on-antimalarial-drugs-11591053222?mod=searchresults&page=1&pos=9">politicized science</a>,” and the Los Angeles Times claimed that the retracted papers “<a href="https://www.latimes.com/business/story/2020-06-08/coronavirus-retracted-paper">contaminated global coronavirus research</a>.” </p>
<p>These headlines may have merit, but perspective is also needed. <a href="https://doi.org/10.1126/science.aav8384">Retractions are rare</a> – only about 0.04% of published papers are withdrawn – but scrutiny, update and correction are common. It is how science is supposed to work, and it is happening in all areas of research relating to SARS-CoV-2. </p>
<p>Doctors have learned that the disease <a href="https://doi.org/10.1126/science.abc3208">targets numerous organs</a>, not just the lungs as was initially thought. Scientists are still working on understanding whether COVID-19 patients <a href="https://doi.org/10.1038/s41591-020-0897-1">develop immunity</a> to the disease.
As for hydroxychloroquine, <a href="https://doi.org/doi:10.1126/science.abd2496">three new large studies</a> published after the Lancet retraction indicate that the malaria drug is indeed ineffective in preventing or treating COVID-19. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=874&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=874&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=874&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1098&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1098&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345366/original/file-20200702-111318-a1ncb4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1098&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Since the beginning of scientific publishing, peer review has helped weed out bad science, but public discourse between researchers has easily played as big a role.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Nature_cover,_November_4,_1869.jpg">Public Domain</a></span>
</figcaption>
</figure>
<h2>Science is self-correcting</h2>
<p>Before a paper is published, it undergoes peer review by experts in the field who recommend to the journal editor whether it should be accepted for publication, rejected or reconsidered after modification. The reputation of the journal is dependent on high-quality peer review, and once a paper is published, it is in the public domain, where it can then be evaluated and judged by other scientists. </p>
<p>The publication of the Lancet and the New England Journal of Medicine papers failed at the level of peer review. But scrutiny by the scientific community – likely spurred on by the public spotlight on coronavirus research – caught the mistakes in record time.</p>
<p>The hydroxychloroquine article published in The Lancet was retracted only 13 days after it was published. By contrast, it took 12 years for the Lancet to retract the fraudulent article that <a href="https://doi.org/10.1016/S0140-6736(97)11096-0">incorrectly claimed vaccinations cause autism</a>.</p>
<p>It is not yet known whether these papers involved deliberate scientific misconduct, but mistakes and corrections are common, even for top scientists. For example, <a href="https://www.nobelprize.org/prizes/chemistry/1954/pauling/facts/">Linus Pauling</a>, who won the Nobel Prize in Chemistry in part for working out the structure of proteins, later published an <a href="https://doi.org/10.1073/pnas.39.2.84">incorrect structure of DNA</a>. <a href="https://doi.org/10.1038/171737a0">Watson and Crick</a> soon published the correct double-helix structure. Mistakes and corrections are a hallmark of progress, not foul play.</p>
<p>Importantly, these errors were exposed by other scientists. They were not uncovered by some policing body or watchdog group. </p>
<p>This back-and-forth between academics is foundational to science. There is no reason to believe that scientists are more virtuous than anyone else. Rather, the mundane human traits of curiosity, competitiveness, self-interest and concern for reputation, at play both before and after publication, are what allow science to regulate itself. A model based on robust evidence emerges while the weaker one is abandoned.</p>
<h2>Living with uncertainty</h2>
<p>From high school classes and textbooks, science seems like a body of well-known facts and principles that are straightforward and incontrovertible. These sources view science in hindsight and often make discoveries seem inevitable, even dull. </p>
<p>In reality, scientists learn as they go. Uncertainty is inherent to the path of discovery, and success is not guaranteed. <a href="https://doi.org/10.1093/biostatistics/kxx069">Only 14% of drugs and therapies</a> that go through human clinical trials ultimately win FDA approval, with less than a 4% success rate for cancer drugs. </p>
<p>The process of science generally takes place below the radar of public awareness, and so this uncertainty is not generally in view. However, Americans are <a href="https://www.journalism.org/2020/04/29/1-americans-are-turning-to-media-government-and-others-for-covid-19-news/">paying close attention</a> to the COVID-19 pandemic, and many are, for the first time, seeing the sausage as it is being made. </p>
<p>Although the recent retractions may be unappetizing, medical science has been very successful over the long run. Smallpox has been eradicated, infections are treated with antibiotics rather than amputation and pain management during surgery has advanced well beyond biting on a stick. </p>
<p>The system is by no means perfect, but it is pretty darned good.</p>
<p><em>This story was edited on July 9 to more precisely describe the state of hydroxychloroquine research.</em></p>
<p class="fine-print"><em><span>Mark R. O'Brian receives funding from the National Institutes of Health</span></em></p>Severe scrutiny of two major papers, including one about the effectiveness of hydroxychloroquine, is part of science’s normal process of self-correction.Mark R. O'Brian, Professor and Chair of Biochemistry, Jacobs School of Medicine and Biomedical Sciences, University at BuffaloLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1305682020-02-11T13:53:48Z2020-02-11T13:53:48ZWhy sequencing the human genome failed to produce big breakthroughs in disease<figure><img src="https://images.theconversation.com/files/314042/original/file-20200206-43089-1b0x437.jpg?ixlib=rb-1.1.0&rect=17%2C8%2C5973%2C3440&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Early proponents of genome sequencing made misleading predictions about its potential in medicine.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/innovative-science-medicine-concept-design-576826981">Natali_ Mis/Shutterstock.com</a></span></figcaption></figure><p>An emergency room physician, initially unable to diagnose a disoriented patient, finds on the patient a wallet-sized card providing access to his genome, or all his DNA. The physician quickly searches the genome, diagnoses the problem and sends the patient off for a gene-therapy cure. That’s what a Pulitzer prize-winning <a href="https://www.latimes.com/archives/la-xpm-1996-03-03-tm-42636-story.htm">journalist imagined</a> 2020 would look like when she reported on the Human Genome Project back in 1996.</p>
<h2>A new era in medicine?</h2>
<p>The Human Genome Project was an international scientific collaboration that successfully mapped, sequenced and made publicly available the genetic content of human chromosomes – or all human DNA. Taking place between 1990 and 2003, the project caused many to speculate about the future of medicine. In 1996, Walter Gilbert, a Nobel laureate, <a href="https://www.latimes.com/archives/la-xpm-1996-03-03-tm-42636-story.html">said</a>, “The results of the Human Genome Project will produce a tremendous shift in the way we can do medicine and attack problems of human disease.” In 2000, Francis Collins, then head of the HGP at the National Institutes of Health, <a href="https://web.ornl.gov/sci/techresources/Human_Genome/project/clinton3.shtml">predicted</a>, “Perhaps in another 15 or 20 years, you will see a complete transformation in therapeutic medicine.” The same year, President Bill Clinton <a href="https://www.cnn.com/2000/HEALTH/06/26/human.genome.05/index.html">stated</a> the Human Genome Project would “revolutionize the diagnosis, prevention and treatment of most, if not all, human diseases.”</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=389&fit=crop&dpr=1 600w, https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=389&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=389&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=488&fit=crop&dpr=1 754w, https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=488&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/314040/original/file-20200206-43113-hf9gqo.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=488&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">President Clinton, flanked by J. Craig Venter, left, and Francis Collins, right, announces the completion of a rough draft of the human genome on June 26, 2000.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Associated-Press-Domestic-News-Dist-of-Columbi-/570adf5762e5da11af9f0014c2589dfb/17/0">AP Photo/Rick Bowmer</a></span>
</figcaption>
</figure>
<p>It is now 2020 and no one carries a genome card. Physicians typically do not examine your DNA to diagnose or treat you. Why not? As I explain in a recent <a href="https://doi.org/10.1080/01677063.2019.1706093">article in the Journal of Neurogenetics</a>, the causes of common debilitating diseases are complex, so they typically are not amenable to simple genetic treatments, despite the hope and hype to the contrary.</p>
<h2>Causation is complex</h2>
<p>The idea that a single gene can cause common diseases has been around for several decades. In the late 1980s and early 1990s, high-profile scientific journals, including Nature and JAMA, announced single-gene causation of <a href="https://doi.org/10.1038/325783a0">bipolar disorder</a>, <a href="https://doi.org/10.1038/336164a0">schizophrenia</a> and <a href="https://doi.org/10.1001/jama.1990.03440150063027">alcoholism</a>, among other conditions and behaviors. These articles drew <a href="https://www.washingtonpost.com/archive/politics/1987/02/26/manic-depression-gene-found/16b6f549-127c-44ed-8b75-75fcf52f60b9/">massive attention</a> in the <a href="https://www.nytimes.com/1990/04/18/us/scientists-see-a-link-between-alcoholism-and-a-specific-gene.html">popular media</a>, but were <a href="https://doi.org/10.1038/342238a0">soon</a> <a href="https://doi.org/10.1038/ng0193-49">retracted</a> <a href="https://doi.org/10.1038/325806a0">or</a> <a href="https://doi.org/10.1038/336167a0">failed</a> <a href="https://doi.org/10.1001/jama.1991.03470130081033">attempts</a> <a href="https://doi.org/10.1001/jama.1993.03500130087038">at</a> <a href="https://doi.org/10.1002/ajmg.1320540202">replication</a>. These reevaluations completely undermined the initial conclusions, which often had relied on <a href="https://doi.org/10.1016/0166-2236(93)90003-5">misguided statistical tests</a>. Biologists were generally aware of these developments, though the follow-up studies received little attention in popular media.</p>
<p>There are indeed individual gene mutations that cause devastating disorders, such as <a href="https://doi.org/10.1038/306234a0">Huntington’s disease</a>. But most common debilitating diseases are not caused by a mutation of a single gene. This is because people who have a debilitating genetic disease, on average, do not survive long enough to have numerous healthy children. In other words, there is strong evolutionary pressure against such mutations. Huntington’s disease is an exception that endures because it typically does not produce symptoms until a patient is beyond their reproductive years. Although new mutations for many other disabling conditions occur by chance, they don’t become frequent in the population. </p>
<p>Instead, most common debilitating diseases are caused by combinations of mutations in many genes, each having a very small effect. They interact with one another and with environmental factors, modifying the production of proteins from genes. The many kinds of microbes that live within the human body can play a role, too. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/314055/original/file-20200206-43079-1u32hbe.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A silver bullet genetic fix is still elusive for most diseases.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/little-girl-hospital-91838105">drpnncpptak/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>Since common serious diseases are rarely caused by single-gene mutations, they cannot be cured by replacing the mutated gene with a normal copy, the premise for gene therapy. <a href="https://doi.org/10.1126/science.aan4672">Gene therapy</a> has gradually progressed in research along a very bumpy path, which has included accidentally causing <a href="https://doi.org/10.1016/j.ymthe.2006.03.001">leukemia</a> and <a href="https://doi.org/10.1093/jnci/92.2.98">at least one death</a>, but doctors recently have been successful treating <a href="https://doi.org/10.1126/science.aan4672">some rare diseases</a> in which a single-gene mutation has had a large effect. Gene therapy for rare single-gene disorders is likely to succeed, but must be tailored to each individual condition. The enormous cost and the relatively small number of patients who can be helped by such a treatment may create insurmountable financial barriers in these cases. For many diseases, gene therapy may never be useful.</p>
<h2>A new era for biologists</h2>
<p>The Human Genome Project has had an enormous impact on almost every field of biological research, by spurring technical advances that facilitate fast, precise and relatively inexpensive sequencing and manipulation of DNA. But these advances in research methods have not led to dramatic improvements in treatment of common debilitating diseases. </p>
<p>Although you cannot bring your genome card to your next doctor’s appointment, perhaps you can bring a more nuanced understanding of the relationship between genes and disease. A more accurate understanding of disease causation may insulate patients against unrealistic stories and false promises.</p>
<p class="fine-print"><em><span>Ari Berkowitz receives funding from the National Science Foundation.</span></em></p>Genome sequencing technologies have transformed biological research in many ways, but have had a much smaller effect on the treatment of common diseases.Ari Berkowitz, Presidential Professor of Biology; Director, Cellular & Behavioral Neurobiology Graduate Program, University of OklahomaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1208812019-07-25T03:13:58Z2019-07-25T03:13:58ZFudged research results erode people’s trust in experts<figure><img src="https://images.theconversation.com/files/285649/original/file-20190725-110170-1342avj.jpg?ixlib=rb-1.1.0&rect=0%2C262%2C3898%2C2372&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">'Only a modest proportion of all flawed publications are identified and retracted.'</span> <span class="attribution"><span class="source">Shutterstock/Triff </span></span></figcaption></figure><p>Reports of <a href="https://www.theage.com.au/national/bad-science-australian-studies-found-to-be-unreliable-compromised-20190719-p528ql.htm">research misconduct</a> have been prominent recently and probably reflect wider problems of relying on dated integrity protections. </p>
<p>The recent reports are from <a href="https://retractionwatch.com">Retraction Watch</a>, which is a blog that reports on the withdrawal of articles by academic journals. The site’s <a href="http://retractiondatabase.org/">database</a> reports that journals have withdrawn a total of 247 papers with an Australian author going back to the 1980s. </p>
<p>This compares with 324 withdrawn papers with Canadian authors, 582 from the UK and 24 from New Zealand. Australian retractions make up 1.2% of all retractions reported on the site, well below Australia’s 4% share of all research publications.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-isnt-broken-but-we-can-do-better-heres-how-95139">Science isn't broken, but we can do better: here's how</a>
</strong>
</em>
</p>
<hr>
<p>Australian retractions have fallen from around 25 a year when Retraction Watch was launched in 2010 to an average of 11 a year in 2017 and 2018. This is in line with all retractions falling from 5,108 in 2010 to an average of 660 a year over the last two years. (However, 2010 saw an unusually high number of retractions, so this downward trend may not be as stark as the figures suggest.) </p>
<h2>Scratching the surface</h2>
<p>But this is not a good indication of trends in research cheating. Probably only a modest proportion of all flawed publications are <a href="https://theconversation.com/clearing-the-air-why-more-retractions-are-good-for-science-6008">identified and retracted</a>. There can be a delay of several years before research problems are found, reported and verified.</p>
<p>Common reasons for retracting articles include duplicating results, errors in results or analysis, and plagiarism. Other reasons include falsification or fabrication of data, research misconduct and unreliable research. </p>
<p>But not all retractions are for nefarious reasons. One publisher <a href="http://retractionwatch.com/2018/01/25/good-news-able-retract-article-journal/">retracted a paper</a> because it had through misunderstanding published the wrong version of the paper on its website. It promptly published the right version once its mistake had been noticed.</p>
<p>Research misconduct is a <a href="https://theconversation.com/scientific-fraud-sloppy-science-yes-they-happen-13948">long-standing problem</a>. The <a href="https://theconversation.com/behind-closed-doors-what-the-piltdown-man-hoax-from-1912-can-teach-science-today-76967">Piltdown Man hoax</a> in which the hoaxer claimed to have discovered the “missing link” between apes and humans was perpetrated back in 1912. </p>
<p>And research misconduct can have serious consequences. Opposition to vaccination is bolstered by a <a href="https://theconversation.com/mondays-medical-myth-the-mmr-vaccine-causes-autism-3739">fraudulent paper</a> claiming a link between the measles, mumps and rubella vaccine and autism and bowel disease, which was published in 1998.</p>
<p>It is hard to argue for evidence-based <a href="https://theconversation.com/the-death-of-evidence-in-education-policy-27505">policy</a> and <a href="https://theconversation.com/evidence-based-medicine-is-broken-why-we-need-data-and-technology-to-fix-it-29625">practice</a> if there are serious doubts about the quality of the evidence. Even where there is no cheating, confidence in all research is undermined by the <a href="https://theconversation.com/the-replication-crisis-has-engulfed-economics-49202">replication crisis</a> in which researchers can’t repeat earlier findings. </p>
<h2>Misconduct undermines science</h2>
<p>Reports of research misconduct strengthen the <a href="https://theconversation.com/inoculating-against-science-denial-40465">denial of science</a> and undermine the argument for taking concerted action to manage <a href="https://theconversation.com/why-old-school-climate-denial-has-had-its-day-117752">climate change</a> and other problems. </p>
<p>Reports of <a href="https://theconversation.com/fabricating-and-plagiarising-when-researchers-lie-33732">research misconduct</a> are coinciding with reports of student <a href="https://theconversation.com/policing-plagiarism-could-make-universities-miss-the-real-problems-45172">academic misconduct</a>, mostly <a href="https://theconversation.com/au/topics/plagiarism-3644">plagiarism</a> and <a href="https://theconversation.com/when-does-getting-help-on-an-assignment-turn-into-cheating-120215">contract cheating</a> where a student pays another to complete their assessment.</p>
<p>The stresses on <a href="https://theconversation.com/publish-or-perish-culture-encourages-scientists-to-cut-corners-47692">researcher</a> and <a href="https://theconversation.com/policing-plagiarism-could-make-universities-miss-the-real-problems-45172">student</a> malefactors are probably similar: the pressure to succeed in an increasingly competitive environment. As societies become more unequal, there are much higher rewards for apparent success and bigger penalties for failure.</p>
<p>Academic misconduct also likely reflects the contemporary conditions of universities, which are increasingly expected to be <a href="https://theconversation.com/universities-run-as-businesses-cant-pursue-genuine-learning-43402">businesslike</a> in managing their greatly increased resources.</p>
<p>Much higher student-to-staff ratios make it much harder to develop the close and supportive relations between teachers and students that discourage <a href="https://theconversation.com/assessment-design-wont-stop-cheating-but-our-relationships-with-students-might-76394">cheating</a>.</p>
<h2>Tackling the problem</h2>
<p>Universities are responding by increasing their <a href="https://theconversation.com/policing-wont-be-enough-to-prevent-pay-for-plagiarism-42999">regulation and oversight</a> of academic activities. They are using <a href="https://theconversation.com/carrot-or-the-stick-technology-and-university-plagiarism-9802">software to detect cheating</a>, but this is not a <a href="https://theconversation.com/universities-must-stop-relying-on-software-to-deal-with-plagiarism-113487">full solution</a>.</p>
<p>The Australian government established the <a href="https://www.arc.gov.au/policies-strategies/strategy/australian-research-integrity-committee-aric">Australian Research Integrity Committee</a> in 2011, but it <a href="https://theconversation.com/cracking-the-code-of-ethical-research-practices-1925">has limitations</a>. It reviews institutions’ responses to allegations of research misconduct, which in turn integrate institutions’ academic integrity policies, employees’ contracts and institutions’ disciplinary policies and enterprise agreements.</p>
<p>The fundamental problem is that the traditional systems for ensuring researcher and student integrity are based on the trust that is built from personal interactions within a much smaller system. They are also based on the volunteerism of the less formal traditional scholarly community for refereeing grant applications and journal submissions. </p>
<p>More systematic and rigorous processes will probably be needed as higher education transitions to universal access and wholesale research. As processes are formalised the costs of refereeing submissions may no longer be hidden in experts’ voluntary contributions to their community of scholars. They are likely to be professionalised, and costed and paid for explicitly.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/dependent-and-vulnerable-the-experiences-of-academics-on-casual-and-insecure-contracts-118608">Dependent and vulnerable: the experiences of academics on casual and insecure contracts</a>
</strong>
</em>
</p>
<hr>
<p>Academics’ relations with most undergraduate students and with most researchers will become more formal. Sadly, academics will likely have to trust their students and fellow researchers less. Even collaborating researchers have been caught out by co-authors’ failures in the parts of the research and writing those co-authors were responsible for.</p>
<p>Conversely, much of what students and researchers have done on trust will have to be checked by new systems introduced by institutions, research granting bodies and publishers. Most students and researchers will experience far more bureaucracy.</p>
<p>The failures of academic integrity of students and researchers may be only a fraction of all work by scholars. But they so corrode trust in academic qualifications and publications that stringent measures will be needed to protect academic integrity. Hopefully those measures will be more preventive than punitive.</p>
<hr>
<p><em>Correction: this article originally incorrectly calculated the percentage of Australian papers on the Retraction Watch site. It has also been updated to give some context around the downward trend for all papers on the site.</em></p>
<p class="fine-print"><em><span>Gavin Moodie receives funding from the Social Sciences and Humanities Research Council of Canada and has benefited from traditional scholarly practices as a student, researcher, author, reviewer and editor. </span></em></p>A database of retractions shows hundreds of academic articles with Australian authors have been withdrawn. Research misconduct threatens to corrode trust in academic qualifications and publications.Gavin Moodie, Adjunct professor, RMIT UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/651602017-01-03T20:13:53Z2017-01-03T20:13:53ZHow to quickly spot dodgy science<figure><img src="https://images.theconversation.com/files/150058/original/image-20161214-18876-es15do.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Is that a black hole, or a hole in their data?</span> <span class="attribution"><span class="source">NASA Goddard Space Flight Center</span>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>I haven’t got time for science, or at least not all of it. I cannot read <a href="http://esoads.eso.org/cgi-bin/nph-abs_connect?db_key=AST&db_key=PRE&qform=AST&arxiv_sel=astro-ph&sim_query=YES&ned_query=YES&adsobj_query=YES&aut_logic=AND&obj_logic=OR&author=&object=&start_mon=1&start_year=2015&end_mon=12&end_year=2015&ttl_logic=OR&title=&txt_logic=OR&text=&nr_to_return=200&start_nr=1&jou_pick=ALL&ref_stems=ApJ%2C+A%26A%2C+AJ%2C+MNRAS%2C+PASA%2C+PASP%2C+PASJ&data_and=ALL&group_and=ALL&start_entry_day=&start_entry_mon=&start_entry_year=&end_entry_day=&end_entry_mon=&end_entry_year=&min_score=&sort=CITATIONS&data_type=SHORT&aut_syn=YES&ttl_syn=YES&txt_syn=YES&aut_wt=1.0&obj_wt=1.0&ttl_wt=0.3&txt_wt=3.0&aut_wgt=YES&obj_wgt=YES&ttl_wgt=YES&txt_wgt=YES&ttl_sco=YES&txt_sco=YES&version=1">9,000</a> astrophysics papers every year. No way. </p>
<p>And I have little patience for bad science, which gets more media attention than it deserves. Even the bad science is overwhelming. Around <a href="http://retractionwatch.com/2016/03/24/retractions-rise-to-nearly-700-in-fiscal-year-2015-and-psst-this-is-our-3000th-post/">700 papers are retracted annually</a>, and that’s a gross underestimate of the bad science in circulation. </p>
<p>I, like most scientists, filter what I read using a few tricks for quickly rejecting bad science. Each trick isn’t foolproof, but in combination they’re rather useful. They can help identify bad science in just minutes rather than hours. </p>
<h2>Okay, this looks bad</h2>
<p>Good science is often meticulous and somewhat anxious. You discover something new or find something unexpected, and frankly you worry a lot about screwing up. Identifying and addressing what could plausibly go wrong, and then writing that up succinctly, takes time. Lots of time. Months. Even years. </p>
<p>If you’re taking the time to do meticulous science, why not take the time to prepare a good manuscript? Make nice-looking figures, proofread it a couple of times, and the like. It seems obvious enough, which is why a sloppy manuscript or poor grammar can be a warning sign of bad science. </p>
<p>Recently, <a href="https://arxiv.org/abs/1610.03031">Ermanno Borra and Eric Trottier</a> claimed to have detected “<a href="https://arxiv.org/abs/1610.03031">signals probably from extraterrestrial intelligence</a>”. I thought this was far-fetched, but still worth looking at the <a href="https://arxiv.org/pdf/1610.03031v1.pdf">paper preprint</a>. An immediate red flag for me was some blurry graphs, and figures with captions that weren’t on the same page. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;786034462601555968&quot;}"></div></p>
<p>Was my caution justified? Well, as I dug into the paper more there were other warning signs. For example, the results relied on Fourier analysis, a mathematical method that can be powerful but is also notorious for picking up <a href="https://seti.berkeley.edu/bl_sdss_seti_2016.pdf">artefacts from scientific instruments and data processing</a>. </p>
<p>Furthermore, the surprising conclusions relied on a tiny subset of data, and there was <a href="https://seti.berkeley.edu/bl_sdss_seti_2016.pdf">no attempt to confirm</a> the conclusions with additional observations. If they were being meticulous, wouldn’t they have taken the time to collect more data and properly format their manuscript? I’m very sceptical of Borra and Trottier’s aliens, <a href="https://seti.berkeley.edu/bl_sdss_seti_2016.pdf">as are many of my colleagues</a>. </p>
<p>Of course, there are exceptions to good-looking good science. The announcement of the Higgs boson, which featured fantastic science, included slide designs that did not impress Vincent Connare, the creator of Comic Sans. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;220421676020678656&quot;}"></div></p>
<p>To be honest, I’m with Connare on the <a href="http://www.theverge.com/2012/7/4/3136652/cern-scientists-comic-sans-higgs-boson">slides</a>. However, this is a reminder that tricks for quickly flagging bad science are imperfect shortcuts, not absolute rules. </p>
<h2>Obviously</h2>
<p>“That’s obvious, why didn’t someone think of that before?” </p>
<p>Well, perhaps someone did.</p>
<p>It has recently been claimed that the <a href="http://www.nature.com/articles/srep35596">expansion of the universe may not be accelerating</a>, which seems at odds with some <a href="https://www.nobelprize.org/nobel_prizes/physics/laureates/2011/">Nobel Prize-winning research</a>. That claim relies on a statistical analysis of supernova data. However, such analyses are nothing new. </p>
<p>Enter keywords into a <a href="http://adsabs.harvard.edu/cgi-bin/nph-abs_connect?db_key=AST&db_key=PRE&qform=AST&arxiv_sel=astro-ph&arxiv_sel=cond-mat&arxiv_sel=cs&arxiv_sel=gr-qc&arxiv_sel=hep-ex&arxiv_sel=hep-lat&arxiv_sel=hep-ph&arxiv_sel=hep-th&arxiv_sel=math&arxiv_sel=math-ph&arxiv_sel=nlin&arxiv_sel=nucl-ex&arxiv_sel=nucl-th&arxiv_sel=physics&arxiv_sel=quant-ph&arxiv_sel=q-bio&sim_query=YES&ned_query=YES&adsobj_query=YES&aut_logic=OR&obj_logic=OR&author=&object=&start_mon=&start_year=&end_mon=&end_year=&ttl_logic=AND&title=supernova+cosmology&txt_logic=OR&text=&nr_to_return=200&start_nr=1&jou_pick=ALL&ref_stems=&data_and=ALL&group_and=ALL&start_entry_day=&start_entry_mon=&start_entry_year=&end_entry_day=&end_entry_mon=&end_entry_year=&min_score=&sort=CITATIONS&data_type=SHORT&aut_syn=YES&ttl_syn=YES&txt_syn=YES&aut_wt=1.0&obj_wt=1.0&ttl_wt=0.3&txt_wt=3.0&aut_wgt=YES&obj_wgt=YES&ttl_wgt=YES&txt_wgt=YES&ttl_sco=YES&txt_sco=YES&version=1">search engine</a>, and you find many previous studies, but without the unexpected conclusions. That’s a red flag right there.</p>
<p>So what happened? Well, one can study up on supernovae or cosmology, but there are experts on Twitter providing succinct explanations and informed responses. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;790693490145370113&quot;}"></div></p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;790844700840202240&quot;}"></div></p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;790859922720247808&quot;}"></div></p>
<p>In plain English, you can only claim that evidence for the accelerating expansion of the universe is marginal if you make incorrect assumptions about supernova properties and brush aside other key observations. And thus facepalm. </p>
<p>Cosmologist Tamara Davis has <a href="https://theconversation.com/relax-the-expansion-of-the-universe-is-still-accelerating-67691">noted</a> that such omissions, accompanied by emphasis on a contrarian conclusion, tend to be misleading spin. Unfortunately, such omissions and erroneous assumptions <a href="https://theconversation.com/no-we-arent-heading-into-a-mini-ice-age-44677">turn up elsewhere too</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/150042/original/image-20161214-18885-14b9l6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Did they just brush aside two decades of cosmology?</span>
<span class="attribution"><span class="source">Paramout</span></span>
</figcaption>
</figure>
<h2>Rank Journals</h2>
<p>You may be aware that some scientific journals come with a certain prestige. There are journal rankings, which typically place the journals <a href="http://www.nature.com/nature/index.html">Nature</a> and <a href="http://www.sciencemag.org/">Science</a> near the top, and university rankings often use papers in prestigious journals as <a href="http://www.nature.com/news/2010/100303/full/464016a.html">a proxy for quality</a>.</p>
<p>I don’t care much for journal rankings myself. Nature and Science chase blockbuster results, but this leads to them publishing a few too many wrong and <a href="http://www.sciencemag.org/news/2015/05/science-retracts-gay-marriage-paper-without-agreement-lead-author-lacour">even fraudulent results</a>. For example, the contrarian supernova paper was published in <a href="http://www.nature.com/articles/srep35596">Nature’s Scientific Reports</a>, which is an online, open access, multidisciplinary journal that is published by the same group as Nature.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=245&fit=crop&dpr=1 600w, https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=245&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=245&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=308&fit=crop&dpr=1 754w, https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=308&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/150055/original/image-20161214-18879-1yx1nl4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=308&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">How do you tell which of these journal articles can be trusted?</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>While I don’t care for journal rankings, I do care about <a href="http://dictionary.cambridge.org/dictionary/english/rank#british-1-2-3">rank</a> journals. If you submit your research to a decent journal, you have to assume you will (or could) get a meticulous editor and referee. That should force you to take some care with your research. However, if you know your paper will be accepted without proper peer review, then anything goes. </p>
<p>University of Colorado librarian <a href="https://scholarlyoa.com/about/">Jeffrey Beall</a> maintains a list of “<a href="https://scholarlyoa.com/publishers/">predatory publishers</a>” (<a href="http://archive.fo/6EByy">archived version here</a> if site is offline), which includes vanity academic publishers that provide a veneer of peer review. Effectively, it is Beall’s list of rank journals, and <a href="https://theconversation.com/vanity-and-predatory-academic-publishers-are-corrupting-the-pursuit-of-knowledge-45490">I treat papers in those journals with suspicion</a>. </p>
<p>So I wasn’t too surprised when a journal on Beall’s list published a paper promoting the <a href="http://retractionwatch.com/2016/07/18/author-loses-2nd-paper-on-supposed-dangers-of-chemtrails/">chemtrails conspiracy theory</a>. And I wasn’t surprised to find that the <a href="https://www.metabunk.org/debunked-j-marvin-herndons-geoengineering-articles-in-current-science-india-and-ijerph.t6456/">paper had serious failings</a>. For better and worse, my cynicism is justified all too often.</p>
<hr>
<p><em>This article expresses the individual views of the author, Michael Brown, and should not be considered representative of an official position from the Monash University School of Physics and Astronomy.</em></p>
<p class="fine-print"><em><span>Michael J. I. Brown receives research funding from the Australian Research Council and Monash University, and has developed space-related titles for Monash University's MWorld educational app.
</span></em></p>There are a few red flags to look out for when reading about new scientific discoveries that can help you spot dodgy or unreliable work.Michael J. I. Brown, Associate professor, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/476922015-09-22T20:14:43Z2015-09-22T20:14:43ZPublish or perish culture encourages scientists to cut corners<figure><img src="https://images.theconversation.com/files/95507/original/image-20150921-31513-oij8ym.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Which of these researchers has fudged their results to get ahead?</span> <span class="attribution"><span class="source">wavebreakmedia</span></span></figcaption></figure><p>Last week there was another very public case of <a href="http://www.abc.net.au/news/2015-09-17/high-profile-researcher-admits-to-fabricating-scientific-results/6781958">a journal article being retracted</a> as a result of <a href="http://jama.jamanetwork.com/article.aspx?articleid=2442406">academic misconduct</a>. This time it was in the Journal of the American Medical Association (<a href="http://jama.jamanetwork.com/journal.aspx">JAMA</a>), with the lead author – Dr Anna Ahimastos, working at Melbourne’s Baker IDI – reportedly admitting she fabricated data.</p>
<p>Sadly, the story is <a href="https://theconversation.com/the-train-wreck-continues-another-social-science-retraction-42404">all-too familiar</a>. But this is not to say that science is imperiled, only that we need to ensure the reward and support structures in academia promote the best practices rather than corner cutting.</p>
<p>We have only recently begun looking closely at how the scientific literature could function better, and what can go wrong. And there are conflicting opinions on how to handle underlying problems. </p>
<p><a href="http://www.senseaboutscience.org/pages/peer-review.html">Peer review</a> is currently the primary tool we have for assessing papers prior to publication. Although it has its strengths, especially when overseen by skilled editors, it can’t pick up all instances of fraud or sloppy scientific practices.</p>
<p>In the past these errors may have lain hidden for many years, or never come to light. Now, post publication scrutiny is picking up more and more papers with questionable data. This is leading to corrections, or even <a href="http://publicationethics.org/files/retraction%20guidelines_0.pdf">retractions</a>. Websites such as <a href="http://retractionwatch.com/">Retraction Watch</a> have sprung up to document these retractions.</p>
<h2>Peerless research</h2>
<p>To non-academics, this might all seem rather surprising. Isn’t science governed by strict protocols for performing and reporting research? </p>
<p>Well, no. Unlike industrial processes, for example, which have standard operating procedures and oversight, science is usually organised locally. Expert laboratory heads typically have the responsibility for the oversight of their laboratories’ work. </p>
<p>Many laboratories work as part of larger collaborations, which may have their own checks and balances in place, as do the academic institutions to which they belong. Even so, ultimately the researchers and individual laboratories are responsible for their own work. </p>
<p>The medical sciences have developed their own standards of <a href="http://www.equator-network.org/">reporting studies</a>, including <a href="http://www.consort-statement.org/">clinical trials</a>. But even these standards are not employed universally.</p>
<p>The system of rewards within science is possibly even more perplexing. Academia is a highly competitive profession. The basic training in science is a PhD, with more than 6,000 awarded each year in Australia alone, which is many more than can ever end up as career researchers, even at the lowest level. </p>
<p>The situation gets worse the more senior a researcher gets. According to a 2013 discussion document, <a href="https://go8.edu.au/sites/default/files/docs/the-changing-phd_final.pdf">less than 5%</a> of those who were originally awarded PhDs find permanent academic positions. Even these senior researchers rarely have permanent positions, but are instead expected to compete for funding every few years. </p>
<p>And the primary way academics compete is in the number of papers they publish in peer reviewed journals, especially the handful of what are considered to be top journals, such as <a href="http://www.sciencemag.org/">Science</a>, <a href="http://www.nature.com/">Nature</a> and <a href="http://www.thelancet.com/">The Lancet</a>.</p>
<h2>Under pressure</h2>
<p>Why does this all matter? Doesn’t this competition lead to selection of the best of the best in research and a faster pace of advancement of science? In fact, the reverse may be the case. </p>
<p>In a seminal paper published in 2005, provocatively titled <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124">Why Most Published Research Findings are False</a>, John Ioannidis discussed a number of reasons why research may be unreliable. One finding was that papers in highly competitive areas were more likely to be false than papers in less competitive fields. </p>
<p>In 2014, the Nuffield Council on Bioethics probed these issues among UK researchers in a year-long <a href="http://nuffieldbioethics.org/project/research-culture/">study</a>. What they found was alarming. </p>
<p>Researchers stated that there was strong pressure on them to publish in a limited number of top journals, “resulting in important research not being published, disincentives for multidisciplinary research, authorship issues, and a lack of recognition for non-article research outputs”. Even worse was that the need to get into these top journals led to “scientists feeling tempted or under pressure to compromise on research integrity and standards”.</p>
<p>What can be done? Increasingly, groups of scientists are coming together to develop standards in <a href="http://researchwaste.net/research-wasteequator-conference/">reporting, conduct</a> and <a href="https://osf.io/e81xl/wiki/home/">reproducibility</a>. Organisations such as the Committee on Publication Ethics (<a href="http://publicationethics.org/">COPE</a>), which I chair, advise editors on how to handle problem papers. </p>
<p>Perhaps most interestingly, a number of <a href="https://innoscholcomm.silk.co/">technological innovations</a> have arisen that could lead to more reliable science, if adopted widely. Probably the most important innovation is that of Open Science, i.e. <a href="http://aoasg.org.au/faq-about-open-access/">open access</a> to research publications, and open access to the data and methodology that underpins those publications. </p>
<p>But we also need to develop ways to reward scientists who do make their publications, data and methodology open for scrutiny, and don’t just pursue publication in top journals. </p>
<p>Research data organisations, such as the Australian National Data Service (<a href="http://ands.org.au/">ANDS</a>), are developing the infrastructure for systematic and standardised ways of linking to data, but as yet funders and institutions do not routinely reward such behaviour.</p>
<p>In the end, science is a human endeavour. And like humans everywhere, those who work in it will do what they are rewarded for, for better or for worse. So we need to make sure those reward structures are encouraging good quality research, not the opposite.</p>
<p class="fine-print"><em><span>Virginia Barbour is Chair of the Committee on Publication Ethics. She was previously Chief Editor, PLOS Medicine.</span></em></p>
Researchers who feel pressured to publish in high-ranking journals are more likely to cut corners, or even commit academic fraud.
Virginia Barbour, Executive Officer, Australasian Open Access Support Group, Australian National University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/45149
2015-07-28T10:22:13Z
Half of biomedical research studies don’t stand up to scrutiny – and what we need to do about that
<figure><img src="https://images.theconversation.com/files/89833/original/image-20150727-7646-x6278p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How much of the research in these journals could be reproduced?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/yeaki/6961051384">Tobias von der Haar</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure>
<p>What if I told you that half of the studies published in scientific journals today – the ones upon which news coverage of medical advances is often based – won’t hold up under scrutiny? You might say I had gone mad. No one would ever tolerate that kind of waste in a field as important – and expensive, to the tune of roughly <a href="http://officeofbudget.od.nih.gov/pdfs/FY15/FY2015_Overview.pdf">US$30 billion in federal spending per year</a> – as biomedical research, right? After all, this is the crucial work that hunts for explanations for diseases so they can better be treated or even cured.</p>
<p>Wrong. The rate of what is referred to as “irreproducible research” – more on what that means in a moment – exceeds 50%, <a href="http://dx.doi.org/10.1371/journal.pbio.1002165">according to a recent paper</a>. Some <a href="http://dx.doi.org/10.1371/journal.pmed.0020124">estimates are even higher</a>. In one analysis, just <a href="http://dx.doi.org/10.1038/483531a">11% of preclinical cancer research studies could be confirmed</a>. That means that an awful lot of “promising” results aren’t very promising at all, and that a lot of researchers who could be solving critical problems based on previously published work end up just spinning their wheels.</p>
<p>So what gives? And how can we fix this problem?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=377&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=377&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=377&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=473&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=473&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89835/original/image-20150727-7662-j5cbjp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=473&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hmmm, I didn’t expect those results….</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-64872892/stock-photo-chemistry-recipient-with-ink-color-inside.html">Test tubes image via www.shutterstock.com</a></span>
</figcaption>
</figure>
<h2>What worms tell us about reproducibility</h2>
<p>Although definitions of reproducibility and replication vary somewhat, for a study to be reproducible, another researcher needs to be able to replicate it, meaning use the same data and analysis to come to the same conclusions. There are lots of reasons why a study may not pass the replication test, from flat-out errors to a failure to adequately describe the methodology used. A researcher may have forgotten a step in the process when writing up the methodology, for example, counted data in the wrong category, or written the wrong code for the statistics program.</p>
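In its narrowest, computational sense, that test can be sketched in a few lines of Python. Everything here – the data, the reported numbers, the function names – is invented for illustration; real analyses are far more involved, but the logic is the same: re-run the documented analysis on the same data and compare against the reported results.

```python
# Toy computational-reproducibility check. All data and "reported" numbers
# below are invented for illustration.
from statistics import mean, stdev

def run_documented_analysis(data):
    """The analysis exactly as the paper's methods section describes it."""
    return {"mean": mean(data), "sd": stdev(data)}

def reproduces(reported, data, tol=1e-6):
    """Re-run the analysis and compare each reported figure to the re-run."""
    rerun = run_documented_analysis(data)
    return all(abs(rerun[key] - reported[key]) <= tol for key in reported)

trial_data = [4.1, 3.8, 5.0, 4.4, 4.7]
reported_results = {"mean": 4.4, "sd": 0.474342}

print(reproduces(reported_results, trial_data))
```

A reported figure that cannot be regenerated this way does not prove fraud – it may just mean a step went undocumented – but it is exactly the kind of discrepancy a replication attempt surfaces.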
<p><a href="https://theconversation.com/clearing-the-air-why-more-retractions-are-good-for-science-6008">Faking results</a> is another reason, but it’s not nearly as common as the others. Out-and-out fraud like that, or suspected fraud, is the reason for <a href="http://dx.doi.org/10.1073/pnas.1212247109">slightly fewer than half of the 400-plus retractions per year</a>. But there are something like two million papers published annually, so the vast majority of studies containing irreproducible data are never retracted. And most scientists would agree that they shouldn’t be; after all, most science is overturned one way or another over time. Retraction should be reserved for the most severe cases. That doesn’t mean irreproducible papers shouldn’t be somehow marked, though.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89838/original/image-20150727-7646-utfyxu.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A girl takes her deworming tablet.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/savethechildrenusa/7051746493">Save the Children</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>Here’s a fresh example of a study that turned out not to be reproducible, because the results couldn’t be replicated: as <a href="http://www.buzzfeed.com/bengoldacre/deworming-trials">Ben Goldacre relates in BuzzFeed</a>, two economists published a <a href="http://dx.doi.org/10.1111/j.1468-0262.2004.00481.x">massive study in 2004</a> claiming that a “deworm everyone” approach in Kenya “improved children’s health, school performance, and school attendance,” even among children several miles away who didn’t get deworming pills. <a href="http://www.who.int/elena/titles/deworming/en/">Endorsed by the World Health Organization</a>, it helped set policy that affects hundreds of millions of children annually in the developing world.</p>
<p>But now researchers have published <a href="http://dx.doi.org/10.1093/ije/dyv127">papers</a> describing two <a href="http://dx.doi.org/10.1093/ije/dyv128">failures</a> to replicate the original findings. Many of them just didn’t hold up, although some did.</p>
<p>That, as Goldacre explains, “is definitely problematic.” But the reanalyses were possible only because the original authors “had the decency, generosity, strength of character, and intellectual confidence to let someone else peer under the bonnet” – a <a href="http://dx.doi.org/10.1001/jama.2014.9646">rare situation indeed</a>.</p>
<h2>The fixes</h2>
<p>Researchers are aware of the reproducibility problem, and some are trying to fix it. In response to alarming findings about the reproducibility of <a href="http://dx.doi.org/10.1038/483531a">basic cancer research</a>, a program called the <a href="http://validation.scienceexchange.com/#/reproducibility-initiative">Reproducibility Initiative</a> has started providing “both a mechanism for scientists to independently replicate their findings and a reward for doing so.” It’s <a href="http://blog.scienceexchange.com/2012/08/the-reproducibility-initiative/">chosen 50 studies for independent validation</a> – or not, since there’s certainly a chance the initial results won’t be reproducible. Those working on the project will perform the same kind of analyses that researchers did in the worm study replications. A similar effort has been <a href="https://osf.io/ezcuj/wiki/home/">ongoing in psychology</a>, and other projects are under way in the <a href="http://www.bitss.org/">social sciences</a>.</p>
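One crude version of the question these projects ask – is the replication's result statistically consistent with the original's? – is a z-test on the difference between the two estimates. The effect sizes and standard errors below are invented, and real replication projects use more careful criteria, but the sketch captures the idea.

```python
# Rough consistency check between an original finding and its replication.
# All effect sizes and standard errors are invented for illustration.
import math

def consistent(orig_effect, orig_se, repl_effect, repl_se, z=1.96):
    """Is the gap between the two estimates within what their combined
    standard errors allow at roughly the 95% level?"""
    diff = orig_effect - repl_effect
    se_diff = math.sqrt(orig_se**2 + repl_se**2)
    return abs(diff) <= z * se_diff

# Original study reports a large effect; the replication finds almost none.
print(consistent(orig_effect=0.80, orig_se=0.15, repl_effect=0.05, repl_se=0.10))
```

When the check fails, as here, it does not by itself say which study is right – only that the two results cannot comfortably be describing the same underlying effect.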
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=335&fit=crop&dpr=1 600w, https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=335&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=335&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=421&fit=crop&dpr=1 754w, https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=421&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/89836/original/image-20150727-7668-esb8u0.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=421&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Research data need to be an open book.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/brenda-starr/5813347420">Brenda Clarke</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>All of these efforts will require scientists to share data, as the authors of the deworming study did. That has been a requirement in human studies for some years now, by <a href="http://www.nhlbi.nih.gov/research/funding/human-subjects/data-sharing">many funders</a>, and it’s <a href="http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html">encouraged by many journal editors</a>. And while it’s not met 100% of the time, <a href="http://dx.doi.org/10.1056/NEJMsa1409364">compliance is growing</a>. Some basic science journals are <a href="http://www.nytimes.com/2015/06/26/science/journal-science-releases-guidelines-for-publishing-scientific-studies.html?_r=0">moving to make it a requirement</a>, too.</p>
<p>Perhaps more important, however, is that researchers – and the public that funds many of them – realize that science is a process, and that all knowledge is provisional. “It’s not just naive to expect that all research will be perfectly free from errors,” writes Goldacre, “it’s actively harmful.” <a href="http://www.vox.com/2015/3/23/8264355/research-study-hype">Journalists, take note</a>.</p>
<p>Translated into policy, that means valuing replication efforts, which right now are essentially unfunded and hardly ever published. If we want scientists to validate others’ work, we’ll need to create grants to do that. That means digging up additional funding, but replicating a study costs a tiny fraction of what the original work does. Funding new studies based on those that turn out to be irreproducible…well, now that’s expensive.</p>
<p class="fine-print"><em><span>Ivan Oransky, global editorial director of MedPage Today, is co-founder of Retraction Watch. Retraction Watch, through its parent organization, The Center For Scientific Integrity, is funded by a generous grant from the John D. and Catherine T. MacArthur Foundation.</span></em></p>
It’s a problem when much of what winds up in scientific journals isn’t replicable, for various reasons. The research community is taking baby steps toward addressing the “reproducibility crisis.”
Ivan Oransky, Distinguished Writer In Residence, Arthur Carter Journalism Institute, New York University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/43083
2015-06-11T11:25:33Z
Retraction of scientific papers for fraud or bias is just the tip of the iceberg
<figure><img src="https://images.theconversation.com/files/84558/original/image-20150610-6790-16w9et7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Taking a closer look at the details. </span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/dl2_lim.mhtml?license_type=standard&src=uvSW41N5REffZAG9gh1abw-1-43&id=184853708&size=medium_jpg&submit_jpg=">Small print by Shutterstock</a></span></figcaption></figure>
<p>Publishing clinical trials in medical journals can help doctors and scientists rise through the ranks of the research hierarchy. While most play the publication game fairly, some cheat. All misconduct undermines the public’s trust in science – such as the recent <a href="http://retractionwatch.com/2015/05/30/weekend-reads-gay-canvassing-study-saga-continues-elsevier-policy-sparks-concern-a-string-of-scandals/">retracted paper about gay canvassers</a> – but health research scandals put the health of millions of patients around the world in jeopardy.</p>
<p>Professionals and patients depend on results from systematic reviews of clinical trials, which evaluate all the evidence on a particular issue, to know whether or not treatments are safe and effective. However, those of us who coordinate the preparation of Cochrane systematic reviews of treatments for seriously injured patients believe that these types of reviews can no longer be entirely trusted, because of research misconduct and publication bias – an argument <a href="http://www.bmj.com/cgi/doi/10.1136/bmj.h2463">we recently made in the BMJ</a>.</p>
<h2>Falsified reports</h2>
<p>Most medical journal editors and systematic reviewers take clinical trial reports at face value with little or no effort to confirm whether a particular trial even took place. A Cochrane systematic review showing that the infusion of high-dose sugar solution prevents death after head injury, for example, <a href="http://www.bmj.com/content/334/7590/392">was later retracted</a> after our review editors were unable to confirm that any of the included trials took place.</p>
<p>As part of the investigation, the London School of Hygiene & Tropical Medicine editors contacted the editor of the journal that published one of the doubtful trials. His response? That there had been doubts about the data but that doubts were different from concluding that it was fabricated.</p>
<p>Similarly, the conclusions of a review of starch infusions in critically ill patients changed substantially after excluding seven entirely fabricated trials by <a href="http://www.reuters.com/article/2011/03/04/us-journals-retractions-idUSTRE7235J820110304">Joachim Boldt</a>. </p>
<p>Investigating fraud is hard work, and it is easier for journal editors to ignore the problem and perpetuate the myth that peer review of trial reports ensures their scientific quality. But how can systematic reviews claim to provide “trusted evidence” when all evidence is taken on trust?</p>
<h2>Bias in reviews</h2>
<p>The second major problem is that the medical literature contains a biased sample of clinical trials. Clinical trials showing that a particular treatment is effective are much <a href="https://theconversation.com/drug-companies-have-a-year-to-publish-their-data-or-well-do-it-for-them-15184">more likely to be published</a> than those showing no benefit or harm.</p>
<p>As a result, systematic reviews based on published research are biased – they emphasise the positive and eliminate the negative. Despite decades of exhortation about trial publication, around half of all clinical trials remain unpublished – and so even the most diligent efforts to synthesise the results from all relevant clinical trials are in vain.</p>
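The distorting effect of that selection is easy to demonstrate with a toy simulation (all numbers invented): give a treatment a small true benefit, run many underpowered trials, and "publish" only the ones that reach a significantly positive result. The published literature then overstates the benefit, and no amount of diligent synthesis of published trials can recover the truth.

```python
# Toy simulation of publication bias. All parameters are invented.
import random
from statistics import mean

random.seed(1)

TRUE_EFFECT = 0.1   # modest real benefit of the treatment
SE = 0.2            # standard error of each small trial's estimate
N_TRIALS = 1000

# Each trial's estimate is the true effect plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_TRIALS)]

# "Publish" only trials whose estimate is significantly positive (z > 1.96).
published = [e for e in estimates if e / SE > 1.96]

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean of all trials:     {mean(estimates):.2f}")
print(f"mean of published only: {mean(published):.2f}  ({len(published)} of {N_TRIALS})")
```

Because every published estimate had to clear the significance threshold, the pooled published estimate is several times the true effect; the unpublished trials that would have corrected it are exactly the ones the reviewer never sees.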
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/84559/original/image-20150610-6796-fstz9e.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Cherry picking the data.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-197881169/stock-photo-five-freshly-picked-bright-red-cherries-lined-up-on-a-wooden-picnic-table.html?src=GzQBuEMPzQg_NoZSnRA2gQ-2-64">Cherries by Shutterstock</a></span>
</figcaption>
</figure>
<p>Even when trials are published, information on side effects <a href="https://theconversation.com/drug-safety-relies-on-people-like-david-tackling-the-goliaths-of-big-pharma-14878">is often omitted</a> from published reports. Clinical trial information is too important to depend on the publication game – after all, patients’ lives depend on it. Reviews of research evidence should be based on the actual trial data obtained from compulsory clinical trial registers. These registers would specify in advance the information to be collected in the trial – and when the trial was completed, they would provide the actual raw data. <a href="http://www.alltrials.net/">Providing all the information collected</a> would avoid the bias associated with publishing only some of the findings.</p>
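A compulsory register makes a simple audit possible: compare the outcomes a trial pre-specified at registration with the outcomes that later appear in its published report. The registry IDs and outcome names below are invented for illustration, but the set arithmetic is the whole idea.

```python
# Sketch of an outcome-reporting audit against a trial register.
# Registry IDs and outcome names are invented for illustration.

registered = {
    "NCT00000001": {"all-cause mortality", "length of hospital stay"},
    "NCT00000002": {"all-cause mortality", "adverse events"},
}

published = {
    "NCT00000001": {"length of hospital stay", "patient satisfaction"},
    "NCT00000002": {"all-cause mortality", "adverse events"},
}

def audit(registered, published):
    """Flag outcomes registered but never reported (possible selective
    reporting) and outcomes reported but never registered (possible
    outcome switching)."""
    report = {}
    for trial_id, pre_specified in registered.items():
        reported = published.get(trial_id, set())
        report[trial_id] = {
            "unreported": pre_specified - reported,
            "unregistered": reported - pre_specified,
        }
    return report

for trial_id, flags in audit(registered, published).items():
    print(trial_id, flags)
```

In this invented example, the first trial silently dropped its mortality outcome and swapped in a new one – precisely the behaviour a register with raw data attached would expose.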
<p>The public need to know what the trial found rather than a given doctor’s spin on the results – usually designed to make results look more interesting or newsworthy.</p>
<p>However, medical journals and publishers of systematic reviews have no incentive to change. They make good money putting a sophisticated gloss on whatever research manuscripts are available to them. When fraud scandals break they throw up their arms in horror but there is no structural change.</p>
<p>Although clinical trial research is heavily regulated, fabricated research is completely immune to this regulation, since all it requires is a doctor who has lost the plot and has access to a computer. There is no shortage of medical journals that will publish it – no questions asked. Progress will only be made when the general public becomes involved.</p>
<p>Accurate health research information is a public good that is as important as safe clean drinking water. We need a health research information system that delivers pure research information that is untainted by the career interests of scientists or the commercial interests of publishers.</p>
<p class="fine-print"><em><span>Ian Roberts does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
After we were stung, we realised just how much of a threat misconduct and cherry-picked data are to health.
Ian Roberts, Professor of Epidemiology and Public Health, London School of Hygiene & Tropical Medicine
Licensed as Creative Commons – attribution, no derivatives.