Scientific fraud – The Conversation

<h1>When authoritative sources hold onto bad data: A legal scholar explains the need for government databases to retract information</h1>
<figure><img src="https://images.theconversation.com/files/564733/original/file-20231211-15-sxj8oy.jpg?ixlib=rb-1.1.0&rect=0%2C5%2C3888%2C2578&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Government information sources like the U.S. patent database often file bad information without labeling it or providing a way to retract it.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/junk-drawer-label-royalty-free-image/485963757">Thinglass/iStock via Getty Images</a></span></figcaption></figure>
<p>In 2004, Hwang Woo-suk was celebrated for his breakthrough discovery creating <a href="https://doi.org/10.1126/science.1094515">cloned human embryos</a>, and his work was published in the prestigious journal Science. But the discovery <a href="https://www.nytimes.com/2009/10/27/world/asia/27clone.html?unlocked_article_code=1.D00.aGQ6.J19oSJ1JE6oX&smid=url-share">was too good to be true</a>; Dr. Hwang had fabricated the data. Science publicly retracted the article and assembled a team to <a href="https://doi.org/10.1126/science.1137840">investigate what went wrong</a>.</p>
<p>Retractions are frequently in the news. The high-profile discovery of a room-temperature superconductor <a href="https://www.wsj.com/science/superconductor-paper-retracted-journal-nature-ranga-dias-c437ce6e">was retracted</a> on Nov. 7, 2023. A series of retractions <a href="https://www.washingtonpost.com/education/2023/07/19/stanford-university-marc-tessier-lavigne-research-controversy/">toppled the president</a> of Stanford University on July 19, 2023. Major early studies on COVID-19 were found to have <a href="https://doi.org/10.1126/science.abd1697">serious data problems</a> and were retracted on June 4, 2020.</p>
<p>Retractions are generally framed as a negative: as science not working properly, as an embarrassment for the institutions involved, or as a flaw in the peer review process. They can be all those things. But they can also be part of a story of science working the right way: finding and correcting errors, and publicly acknowledging when information turns out to be incorrect.</p>
<p>A far more pernicious problem occurs when information is not, and cannot be, retracted. There are many apparently authoritative sources that contain flawed information. Sometimes the flawed information is deliberate, but sometimes it isn’t – after all, to err is human. Often, there is no correction or retraction mechanism, meaning that information known to be wrong remains on the books without any indication of its flaws.</p>
<p>As a <a href="https://scholar.google.com/citations?hl=en&user=SlW0VEkAAAAJ&view_op=list_works&sortby=pubdate">patent and intellectual property legal scholar</a>, I’ve found that this is a particularly harmful problem with government information, which is often considered a <a href="https://dx.doi.org/10.2139/ssrn.4372254">source of trustworthy data but is prone to error</a> and often lacks any means of retracting the information.</p>
<h2>Patent fictions and fraud</h2>
<p>Consider patents, documents that contain many technical details that can be <a href="https://doi.org/10.1038/nbt.3864">useful to scientists</a>. There is <a href="https://doi.org/10.1162/rest_a_01353">no way to retract a patent</a>. And patents contain <a href="https://dx.doi.org/10.2139/ssrn.3538746">frequent errors</a>: Although patents are reviewed by an expert examiner before being granted, <a href="https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=5848&context=flr">examiners do not check</a> whether the scientific data in the patent is correct.</p>
<p>In fact, the U.S. Patent and Trademark Office permits patentees to include <a href="https://doi.org/10.1126/science.aax0748">fictional experiments and data</a> in patents. This practice, called <a href="https://doi.org/10.1126/science.aax0748">prophetic examples</a>, is common; about <a href="https://dx.doi.org/10.2139/ssrn.3202493">25% of life sciences patents contain fictional experiments</a>. The patent office requires that prophetic examples be written in the present or future tense while real experiments can be written in the past tense. But this is confusing to nonspecialists, including scientists, who tend to assume that a phrase like “X and Y are mixed at 300 degrees to achieve a 95% yield rate” indicates a real experiment. </p>
<p>Almost a decade after Science retracted the journal article claiming cloned human cells, <a href="https://www.nytimes.com/2014/02/15/science/disgraced-scientist-granted-us-patent-for-work-found-to-be-fraudulent.html?searchResultPosition=1">Dr. Hwang received a U.S. patent</a> on his retracted discovery. Unlike the journal article, this patent has not been retracted. The patent office did not investigate the accuracy of the data – indeed, it granted the patent long after the data’s inaccuracy had been publicly acknowledged – and there is no indication on the face of the patent that it contains information that has been retracted elsewhere.</p>
<p>This is no anomaly. In a similar example, Elizabeth Holmes, the former – now imprisoned – CEO of Theranos, <a href="https://doi.org/10.1162/rest_a_01353">holds patents</a> on her thoroughly discredited claims for a small device that could rapidly run many tests on a small blood sample. Some of those patents were granted long after Theranos’ fraud headlined major newspapers.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="A document containing numbers and text" src="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=773&fit=crop&dpr=1 600w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=773&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=773&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=972&fit=crop&dpr=1 754w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=972&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/564591/original/file-20231208-29-r2pqhk.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=972&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The U.S. Patent and Trademark Office granted a patent to Theranos on Dec. 18, 2018, three months after the company was dissolved following a series of investigations and lawsuits that detailed its fraud. The patent has not been rescinded and contains no notice of the faulty nature of the information it contains.</span>
<span class="attribution"><a class="source" href="https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/10156579">U.S. Patent and Trademark Office</a></span>
</figcaption>
</figure>
<h2>Long-lived bad information</h2>
<p>This sort of under-the-radar wrong data can be deeply misleading to readers. The system of retractions in scientific journals is not without its critics, but it compares favorably to the alternative of no retractions. Without retractions, readers don’t know when they are looking at incorrect information. </p>
<p>My colleague <a href="https://scholar.google.com/citations?hl=en&user=jYI7hFEAAAAJ&view_op=list_works&sortby=pubdate">Soomi Kim</a> and I conducted a study of patent-paper pairs. We looked at cases where the same information was published in a journal article and in a patent by the same scientists, and the journal paper had subsequently been retracted. We found that while citations to papers dropped steeply after the paper was retracted, there was <a href="https://doi.org/10.1162/rest_a_01353">no reduction in citations to patents</a> with the very same incorrect information. </p>
<p>This probably happened because scientific journals paint a big red “retracted” notice on retracted articles online, informing the reader that the information is wrong. By contrast, patents have no retraction mechanism, so incorrect information continues to spread.</p>
<p>There are many other instances where <a href="https://dx.doi.org/10.2139/ssrn.4372254">authoritative-looking information is known to be wrong</a>. The Environmental Protection Agency publishes emissions data supplied by companies but not reviewed by the agency. Similarly, the Food and Drug Administration disseminates official-looking information about drugs that is generated by drug manufacturers and posted without an evaluation by the FDA.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/3jYqgTKVQGE?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Retractions play an important role in science.</span></figcaption>
</figure>
<h2>Consequences of nonretractions</h2>
<p>There are also economic consequences when incorrect information can’t be easily corrected. The Food and Drug Administration publishes <a href="https://www.fda.gov/drugs/drug-approvals-and-databases/approved-drug-products-therapeutic-equivalence-evaluations-orange-book">a list of patents</a> that cover brand-name drugs. The FDA won’t approve a generic drug unless the generic manufacturer has shown that each patent that covers the drug in question is expired, not infringed or invalid.</p>
<p>The problem is that the list of patents is <a href="https://dx.doi.org/10.2139/ssrn.4372254">generated by the brand-name drug manufacturers</a>, who have an incentive to list patents that <a href="https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-challenges-more-100-patents-improperly-listed-fdas-orange-book">don’t actually cover their drugs</a>. Doing so increases the burden on generic drug manufacturers. The list is not checked by the FDA or anyone else, and there are few mechanisms for anyone other than the brand-name manufacturer to tell the FDA to remove a patent from the list. </p>
<p>Even when retractions are possible, they are effective only when readers pay attention to them. Financial data is sometimes retracted and corrected, but the revisions are not timely. “<a href="https://www.wsj.com/finance/investing/economic-data-lead-markets-and-governments-astray-abd79102">Markets don’t tend to react to revisions</a>,” Paul Donovan, chief economist of UBS Global Wealth Management, told the Wall Street Journal, referring to governments revising gross domestic product figures.</p>
<p>Misinformation is a growing problem, and there are no easy solutions. But there are steps that would almost certainly help. One relatively straightforward step is for trusted data sources like those from the government to follow the lead of scientific journals and create a mechanism to retract erroneous information.</p>
<p class="fine-print"><em><span>Janet Freilich does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Theranos was dissolved years ago, and its CEO, Elizabeth Holmes, is in prison, but the company’s patents based on bad science live on – a stark example of the persistence of faulty information.Janet Freilich, Associate Professor of Law, Fordham UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1875032022-08-18T12:39:23Z2022-08-18T12:39:23ZFake research can be harmful to your health – a new study offers a tool for rooting it out<figure><img src="https://images.theconversation.com/files/479736/original/file-20220817-7931-21c2t3.jpg?ixlib=rb-1.1.0&rect=374%2C64%2C8240%2C5489&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Although most medical research is reliable, studies that are flawed or fake can lead to patients undergoing treatments that might cause harm.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/two-biotechnologists-examining-data-for-medical-royalty-free-image/881494610?adppopup=true">skynesher/E+ via Getty Images</a></span></figcaption></figure><p>If you are suffering with chronic pain, diabetes, heart problems or any other condition, you want to be confident that your doctor will offer you an effective treatment. You certainly don’t want to waste time or money on something that won’t work, or take something that could do you harm. </p>
<p>The best source of information to guide treatment is medical research. But how do you know when that information is reliable and evidence-based? And how can you tell the difference between shoddy research findings and those that have merit?</p>
<p>There’s a long journey to the publication of research findings. Scientists design experiments and studies to investigate questions about treatment or prevention, following certain scientific principles and standards. Then the findings are submitted for publication in a research journal. Editors and other people in the researchers’ field, called peer reviewers, make suggestions to improve the research. When the study is deemed acceptable, it is published as a research journal article.</p>
<p>But a lot can go wrong on this long journey that could make a research journal article unreliable. And peer review is not designed to catch fake or misleading data. Unreliable scientific studies can be hard to spot – whether by reviewers or the general public – but by asking the right questions, it can be done. </p>
<p>While most research has been conducted according to rigorous standards, studies with fake or fatally flawed findings are sometimes published in the scientific literature. It is hard to get an exact estimate of the number of fraudulent studies because the scientific publication process catches some of them before they are published. One study of 526 patient trials in anesthesiology <a href="https://doi.org/10.1111/anae.15263">found that 8% had fake data and 26% were critically flawed</a>. </p>
<p>As a professor of medicine and public health, I have been studying bias in the <a href="https://www.cuanschutz.edu/centers/bioethicshumanities/facultystaff/lisa-bero-phd">design, conduct and publication of scientific research for 30 years</a>. I’ve been developing ways to prevent and detect research integrity problems so the best possible evidence can be synthesized and used for decisions about health. Sleuthing out data that cannot be trusted, whether this is due to intentional fraud or just bad research practices, is key to using the most reliable evidence for decisions.</p>
<h2>Systematic reviews help suss out weak studies</h2>
<p>The most reliable evidence of all comes when researchers pull the results of several studies together in what is <a href="https://doi.org/10.1136/bmj.309.6954.597">known as a systematic review</a>. Researchers who conduct systematic reviews identify, evaluate and summarize all studies on a particular topic. They not only sift through and combine results on perhaps tens of thousands of patients, but can use an extra filter to catch potentially fraudulent studies and ensure they do not feed into recommendations. This means that the more rigorous studies have the most weight in a systematic review and bad studies are excluded based on strict inclusion and exclusion criteria that are applied by the reviewers.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/egJlW4vkb1Y?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Systematic reviews explained.</span></figcaption>
</figure>
<p>To better understand how systematic reviewers and other researchers can identify unreliable studies, my research team interviewed a group of 30 international experts from 12 countries. They explained to us that a shoddy study can be hard to detect because, as one expert explained, it is “designed to pass muster on first glance.” </p>
<p>As our <a href="https://doi.org/10.1016/j.jclinepi.2022.07.006">recently published study reports</a>, some studies look like their data has been massaged, some studies are not as well designed as they claim to be, and some may even be completely fabricated. </p>
<p>Our study provides some important ideas about how to spot medical research that is deeply flawed or fake and should not be trusted. </p>
<p>The experts we interviewed suggested some key questions that reviewers should ask about a study: For instance, did it have ethics approval? Was the <a href="https://www.biomedcentral.com/getpublished/writing-resources/trial-registration#">clinical trial registered</a>? Do the results seem plausible? Was the study funded by an independent source and not the company whose product is being tested?</p>
<p>If the answer to any of these questions is no, then further investigation of the study is needed.</p>
<p>In particular, my colleagues and I found that it’s possible for researchers who review and synthesize evidence to create a checklist of warning signs. These signs don’t categorically prove that research is fraudulent, but they do show researchers as well as the general public which studies need to be looked at more carefully. We used these warning signs to create a screening tool – a set of questions to ask about how a study is done and reported – that provide clues about whether a study is real or not.</p>
<p>Signs include important information that’s missing, like details of ethical approval or where the study was carried out, and data that seems too good to be true. One example might be if the number of patients in a study exceeds the number of people with the disease in the whole country. </p>
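<p>As a rough illustration, the arithmetic behind that last warning sign can be written down as a minimal sketch in Python. The function name and the figures below are hypothetical – the published screening tool is a set of questions for human reviewers, not software.</p>

```python
# Hypothetical illustration of one numerical warning sign from a screening
# tool: flag a trial that reports more patients than plausibly exist.

def enrollment_is_plausible(patients_enrolled: int, national_cases: int) -> bool:
    """Return False when a study claims more patients than the national patient pool."""
    return patients_enrolled <= national_cases

# Made-up figures: a trial reporting 12,000 patients for a disease with only
# 8,000 diagnosed cases in the whole country warrants a closer look.
if not enrollment_is_plausible(patients_enrolled=12_000, national_cases=8_000):
    print("Warning sign: enrollment exceeds the national patient population.")
```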
<h2>Spotting flimsy research</h2>
<p>It’s important to note that our new study does not mean that research in general can’t be trusted.</p>
<p>The COVID-19 pandemic offers examples of how systematic review ultimately filtered out fake research that had been published in the medical literature and disseminated by the media. Early in the pandemic, when the pace of medical research was accelerating, robust and well-run patient trials – and the systematic reviews that followed – helped the public learn which interventions worked well and which were not supported by science.</p>
<p>For example, <a href="https://theconversation.com/ivermectin-is-a-nobel-prize-winning-wonder-drug-but-not-for-covid-19-168449">ivermectin, an antiparasitic drug</a> that is typically used in veterinary medicine and that was promoted by some without evidence as a treatment for COVID-19, <a href="https://doi.org/10.1038/d41586-020-02958-2">was widely embraced</a> in some parts of the world. However, after ruling out fake or flawed studies, a systematic review of research on ivermectin found that <a href="https://doi.org/10.1002/14651858.CD015017.pub3">it had “no beneficial effects</a> for people with COVID-19.”</p>
<p>On the other hand, a systematic review of corticosteroid drugs like dexamethasone found that <a href="https://doi.org/10.1002/14651858.CD014963">the drugs help prevent death</a> when used as a treatment for COVID-19.</p>
<p>There are efforts underway across the globe to ensure that the highest standards of medical research are upheld. Research funders are asking scientists to publish all of their data so it can be fully scrutinized, and medical journals that publish new studies are beginning to screen for suspect data. But everyone involved in research funding, production and publication <a href="https://doi.org/10.1038/d41586-022-00025-6">should be aware</a> that fake data and studies are out there. </p>
<p>The screening tool proposed in our new research is designed for systematic reviewers of scientific studies, so a certain level of expertise is needed to apply it. However, using some of the questions from the tool, both researchers and the general public can be better equipped to read about the latest research with an informed and critical eye.</p>
<p class="fine-print"><em><span>Lisa Bero is Senior Editor, Research Integrity for Cochrane, an international non-profit organization that publishes systematic reviews.</span></em></p>A new screening tool to help study reviewers identify what’s fake or shoddy in research may be on the horizon. And everyday people can apply some of the same critical analysis tools.Lisa Bero, Research Professor Public Health and Medicine, University of Colorado Anschutz Medical CampusLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/863392017-10-29T23:18:10Z2017-10-29T23:18:10ZExpertise in sciences and the decision of what is publishable: a noble yet endangered task<figure><img src="https://images.theconversation.com/files/191800/original/file-20171025-25510-17dutu4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">DR</span> </figcaption></figure><p>When Einstein discovered the peer-review process…</p>
<p>In 1935, a too-quick calculation led Einstein to believe that gravitational waves could not exist within the framework of general relativity (these waves, first observed in 2016, are actually a cornerstone of his work). The <a href="http://www.geology.cwu.edu/facstaff/lee/courses/g503/Einstein_review.pdf">story of his initial publication</a> is richer than the subtle error behind it…</p>
<p>For the first time, Einstein – who had sent his manuscript to the prestigious <em>Physical Review</em> – faced the anonymous peer-review system. The sharp-minded reviewer, whose identity was <a href="http://www.geology.cwu.edu/facstaff/lee/courses/g503/Einstein_review.pdf">revealed only in 2005</a>, pointed out an error. Einstein strongly disagreed with the idea that an editor could send his work out for review without his consent. He then sent his manuscript to another journal, which decided to publish it. But when the time came to check the manuscript’s proofs, Einstein thoroughly revised his paper. This story, both exemplary and exceptional, illustrates the complex relationship between scientists and publication.</p>
<h2>The daily life of expertise in scientific publications</h2>
<p>The German journals in which Einstein published had a low rejection rate for submitted material and were instead open to controversy and scientific debate. Nevertheless, the immense growth of scientific activity forced all scientific journals to follow the example of <em>Physical Review</em>, with one or two (sometimes even three) anonymous reviewers.</p>
<p>While the editor is responsible for selecting what gets published, it is the reviewers who judge whether the work is valid and deserves the (rather subjective) recognition attached to publication in a given journal. Further criticism remains possible: comments on papers can be published after approval through a reviewing process, possibly followed by an author reply.</p>
<p>Once scientists are contacted as possible reviewers by the editor, they have only a short period of time to accept. They will then have a few weeks – sometimes even less – to deliver an informed opinion on the suitability of the manuscript. This unpaid work remains anonymous (with some exceptions).</p>
<p>From my own experience, the time required can vary from one hour to three days. The noble and exciting part of reviewing is that the process of critical reading and author replies sometimes creates a kind of “co-production” of the content that finally appears in the journal: The reviewer often helps improve the paper’s readability and presentation, and sometimes suggests broader perspectives not mentioned by the authors. The reviewer can also catch small or even serious errors; these are generally not critical to the work’s conclusions and are corrected before the paper is finally published.</p>
<h2>Finding good reviewers: the tough part of an editor’s job</h2>
<p>Finding available reviewers is a difficult task for journal editors. Specialists with both the required expertise and a sufficiently large view of the field are rare. They are busy people, and often prefer to review manuscripts when they introduce new ideas rather than evaluating the correctness of a merely incremental paper. Early-career scientists are often more open, as they can enjoy participating in the peer-reviewing process, which is at the heart of the academic system they want to join.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=437&fit=crop&dpr=1 600w, https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=437&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=437&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=549&fit=crop&dpr=1 754w, https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=549&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/174020/original/file-20170615-23574-i6z8up.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=549&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Manuscript status, as found on the server of one of the top five journals specialized in optics. It concerns a co-authored manuscript, which has appeared in April 2017. In the end, reviewer 2 wrote a report, reviewer 1 never responded, and the paper was accepted before reviewer 3 gave his opinion.</span>
<span class="attribution"><span class="source">Daniel Bloch</span></span>
</figcaption>
</figure>
<p>One problem is the growth in the number of submitted manuscripts, which in turn requires more and more reviewers. An editor cannot truly know all the reviewers in his or her pool – either their scientific competence or the quality of their ethics – and this can lead to a number of biases.</p>
<p>Authors are often encouraged or even required to propose possible reviewers for their papers. For a good journal, this can be a way to accurately identify the sub-domain of the manuscript, making it easier to find recognized experts. It works especially well when a solicited reviewer who isn’t close enough to the field of research declines to submit a report but can still point to one or more experts the editor wouldn’t have been able to identify. With lower-quality journals, the editor may lazily follow the authors’ suggestions, at the risk of being steered toward friends or toward members of the same small community trying to exaggerate the importance of their own field. Even worse, the suggested reviewers can sometimes be the authors themselves, hidden behind an electronic alias bearing the name of a supposed specialist.</p>
<p>The anonymous refereeing process can have other drawbacks as well. More than a few researchers know someone whose paper was rejected, only for the idea to miraculously reappear under the name of a colleague who had been suggested as a reviewer. The problem has been reduced by the development of preprint servers. The practice of “anonymous” reviewers asking that a reference be added to their own work is relatively common, but often transparent.</p>
<h2>Recognized journals, predatory ones, and other bad practices</h2>
<p>The pressure to publish – the famous “publish or perish” – and the role of chance in any review mean that any reasonable manuscript free of gross errors will eventually be published, even after rejection by one or more journals. This is not an anomaly; it explains how a hierarchy of journals becomes established. Because online publication is truly cheap compared with print, <a href="http://www.nature.com/nature/journal/v544/n7651/full/544416b.html">“predatory” journals</a> have now appeared claiming to be “peer-reviewed”. Such publications, easily recognized by working scientists, publish for a fee any work claiming to address a scientific audience, and provide only a vague “referee report”.</p>
<p>These deceptive practices crop up because higher education is growing considerably worldwide. In most cases, the costs of publishing in such predatory journals are billed to the university itself, and for faculty at peripheral institutions, publishing can be profitable in terms of recognition and career advancement. Similar failures of the refereeing process also occasionally occur in the <a href="http://www.nature.com/news/peer-review-activists-push-psychology-journals-towards-open-data-1.21549">humanities and social sciences</a>, where nonsense texts that resemble academic work, sometimes computer-generated, can pass the <a href="http://zilsel.hypotheses.org/2548">“refereeing process”</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=365&fit=crop&dpr=1 600w, https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=365&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=365&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=459&fit=crop&dpr=1 754w, https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=459&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/174263/original/file-20170617-16217-1uwepjm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=459&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A hoax around a ‘conceptual penis’. This nonsense paper was published by Cogent and finally retracted. An article by <em>Charlie Hebdo</em> explains peer-reviewing to a broad audience.</span>
<span class="attribution"><span class="source">_Charlie Hebdo_ and Cogent</span></span>
</figcaption>
</figure>
<h2>Some paths for improvement</h2>
<p>“Peer review” is essential for the advancement of science. However, the standards of refereed publication are being strained by the growth of online publishing and the rising number of papers. Several paths are worth considering to <a href="http://www.academie-sciences.fr/pdf/rapport/avis131216.pdf">improve the system</a>:</p>
<ul>
<li><p>Reviewer reports can be made available online, helping readers understand the context of the work and of its publication, much as published criticism does for works of art and of science alike. Such a practice is now being <a href="https://www.nature.com/articles/ncomms10277">tested by some journals</a>. It is also a way to establish the quality of a journal through its ability to select appropriate and sharp referees.</p></li>
<li><p>Some high-profile journals have considered a temporary electronic deposit, open to comments from researchers in the relevant field, before deciding whether to validate the manuscript as a publication. The deposit would be read only by voluntary readers, and validation would follow when the comments are favourable. Most likely, such a system would work only for top papers that are likely to attract known specialists as readers.</p></li>
<li><p>Requiring that submissions and the comments they receive be tracked from journal to journal would limit the publication of lower-quality work. Currently, a manuscript rejected by one journal arrives at the next with no trace of the earlier criticism, except when the two journals belong to the same editorial group. Showing that one improved a piece of research in response to criticism isn’t an admission of failure, but natural and laudable.</p></li>
</ul>
<p>As an author, I would be more confident of receiving a fair evaluation from a journal if I could show how the manuscript was improved by the comments received during a prior submission. At present, I have the feeling that I would be infringing the intellectual property of the first journal and its referees, and so I refrain from attaching such material.</p>
<p class="fine-print"><em><span>Daniel Bloch has received public financing for his research ((national, European and international).</span></em></p>How is a scientific article accepted for publication in an academic journal? What is the role of peer reviewers? Where does the system go astray?Daniel Bloch, Directeur de recherche au CNRS, physicien, spécialiste d’optique, lasers et nanotechnologies, Université Sorbonne Paris NordLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768482017-05-08T00:53:28Z2017-05-08T00:53:28ZPeople don’t trust scientific research when companies are involved<figure><img src="https://images.theconversation.com/files/168205/original/file-20170507-19132-lu45yn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">People seem to think industry-funded research belongs in the garbage.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hiv-testing-laboratory-singleuse-plastic-syringes-506118349">mllejules/Shutterstock.com</a></span></figcaption></figure><p>A soda company sponsoring <a href="https://well.blogs.nytimes.com/2015/08/09/coca-cola-funds-scientists-who-shift-blame-for-obesity-away-from-bad-diets/">nutrition research</a>. An oil conglomerate <a href="https://insideclimatenews.org/news/26052016/agu-american-geophysical-union-exxon-climate-change-denial-science-sponsorship">helping fund a climate-related research meeting</a>. Does the public care who’s paying for science?</p>
<p>In a word, yes. When industry funds science, credibility suffers. And this does not bode well for the types of public-private research partnerships that appear to be becoming <a href="http://www.rdmag.com/article/2015/04/how-academic-institutions-partner-private-industry">more prevalent</a> as <a href="https://www.nsf.gov/statistics/2016/nsb20161/#/report/chapter-4/recent-trends-in-u-s-r-d-performance">government funding for research and development lags</a>. </p>
<p>The recurring topic of conflict of interest has made headlines in recent weeks. The National Academies of Sciences, Engineering, and Medicine has <a href="http://www.the-scientist.com/?articles.view/articleNo/49331/title/National-Academies-Revise-Conflict-of-Interest-Policy/">revised its conflict of interest guidelines</a> following <a href="https://doi.org/10.1371/journal.pone.0172317">questions about whether members</a> of a recent expert panel on GMOs had industry ties or other financial conflicts that were not disclosed in the panel’s final report.</p>
<p><a href="https://doi.org/10.1371/journal.pone.0175643">Our own recent research</a> speaks to how hard it may be for the public to see research as useful when produced with an industry partner, even when that company is just one of several collaborators.</p>
<h2>What people think of funding sources</h2>
<p>We asked our study volunteers what they thought about a proposed research partnership to study the potential risks related to either genetically modified foods or trans fats.</p>
<p>We randomly assigned participants to each evaluate one of 15 different research partnership arrangements – various combinations of scientists from a university, a government agency, a nongovernmental organization and a large food company.</p>
<p>For example, 1/15th of participants were asked to consider a research collaboration that included only university researchers. Another 1/15th of participants considered a research partnership that included both university and government scientists, and so on. In total we presented four conditions where there was a single type of researcher, another six collaborations with two partners, four with three partners and one with all four partners. </p>
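<p>Where does the figure of 15 come from? Four partner types admit 2<sup>4</sup> − 1 = 15 non-empty combinations. The short Python sketch below is only an illustration of that arithmetic – the partner labels are shorthand for the study’s four organization types, not the researchers’ materials.</p>

```python
# Enumerate every non-empty combination of four partner types:
# 4 singles + 6 pairs + 4 triples + 1 full set = 15 arrangements.
from itertools import combinations

partners = ["university", "government agency", "NGO", "food company"]

arrangements = [
    combo
    for size in range(1, len(partners) + 1)
    for combo in combinations(partners, size)
]

print(len(arrangements))  # 15
for combo in arrangements:
    print(" + ".join(combo))
```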
<p><iframe id="O9jF8" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/O9jF8/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>When a research team included an industry partner, our participants were generally less likely to think the scientists would consider a full range of evidence and listen to different voices. An industry partner also reduced how much participants believed any resulting data would provide meaningful guidance for making decisions.</p>
<p>At the outset of our work, we thought including a diverse array of partners in a research collaboration might mitigate the negative perceptions that come with industry involvement. But, while including scientists from a nonindustry organization (particularly a nongovernmental organization) made some difference, the effect was small. Adding a government partner provided no substantive additional benefit.</p>
<p>When we asked participants to describe what they thought about the research partnership in their own words, they were skeptical whether an industry partner could ever be trusted to release information that might hurt its profits.</p>
<p>Our results may be even more troubling because we chose a company with a good reputation. We used pretests to select particular examples – of a corporation, as well as a university, government agency and nongovernmental organization – that had relatively high positive ratings and relatively low negative ratings in a test sample.</p>
<h2>Can industry do valid science?</h2>
<p>You don’t have to look far for real-life examples of poorly conducted or intentionally misleading industry research. The <a href="https://www.justice.gov/opa/pr/glaxosmithkline-plead-guilty-and-pay-3-billion-resolve-fraud-allegations-and-failure-report">pharmaceutical</a>, <a href="https://www.theguardian.com/environment/2016/sep/22/pesticide-manufacturers-own-tests-reveal-serious-harm-to-honeybees">chemical</a>, <a href="https://www.nytimes.com/2016/09/13/well/eat/how-the-sugar-industry-shifted-blame-to-fat.html">nutrition</a> and <a href="https://www.theguardian.com/environment/2015/mar/25/fossil-fuel-firms-are-still-bankrolling-climate-denial-lobby-groups">petroleum</a> industries have all weathered criticism of their research integrity, and for good reason. These ethically questionable episodes no doubt fuel public skepticism of industry research. Stories of pharmaceutical companies conducting <a href="https://doi.org/10.1371/journal.pmed.0020138">less than rigorous clinical trials</a> for the benefit of their marketing departments, or the tobacco industry steadfastly denying the connection between smoking and cancer in the face of mounting evidence, help explain public concern about industry-funded science. </p>
<p>But industry generally has a long and impressive history of supporting scientific research and technical development. Industry-supported research has <a href="https://www.wired.com/2012/09/ff-corning-gorilla-glass/">generated widely adopted technologies</a>, <a href="http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law">driven the evolution of entire economic sectors</a>, <a href="http://articles.latimes.com/2012/oct/20/local/la-me-stanford-ovshinsky-20121021">improved processes that were harmful to public health and the environment</a> and <a href="https://www.bell-labs.com/our-people/recognition/">won Nobel Prizes</a>. And as scientists not currently affiliated with industry scramble to fund their research in an era of tight budgets, big companies have money to underwrite science.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Does it matter within what kind of institution a researcher hangs her lab coat?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/biologycourses/7006382260">Vivien Rolfe</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Can this lack of trust be overcome? Moving forward, it will be essential to address incentives such as short-term profit or individual recognition that can encourage poor research – in any institutional context. By showing how quickly people may judge industry-funded research, our work indicates that it’s critical to think about how the results of that research can be communicated effectively. </p>
<p>Our results should worry those who want research to be evaluated largely on its scientific merits, rather than based upon the affiliations of those involved. </p>
<p>Although relatively little previous scholarship has investigated this topic, we expected to find that including multiple, nonindustry organizations in a scientific partnership might, at least partly, assuage participants’ concerns about industry involvement. This reflects our initial tentative belief that, given the resources and expertise within industry, there must be some way to create public-private partnerships that produce high-quality research which is perceived widely as such.</p>
<p><a href="http://msutoday.msu.edu/news/2017/public-skeptical-of-research-if-tied-to-a-company/">Our interdisciplinary team</a> – a risk communication scholar, a sociologist, a philosopher of science, a historian of science and a toxicologist – is also examining philosophical arguments and historical precedents for guidance on these issues.</p>
<p>Philosophy can tell us a great deal about how the values of investigators <a href="https://global.oup.com/academic/product/a-tapestry-of-values-9780190260811?lang=en&cc=us">can influence their results</a>. And history shows that not so long ago, up until a few decades after World War II, many considered industry support <a href="http://physicstoday.scitation.org/doi/10.1063/PT.3.3081">a way to uphold research integrity</a> by protecting it from government secrecy regimes.</p>
<p>Looking forward, we are planning additional social scientific experiments to examine how specific procedures that research partnerships sometimes use may affect public views about collaborations with industry partners. For example, perhaps open-data policies, transparency initiatives or external reviewer processes may alleviate bias concerns.</p>
<p>Given the central role that industry plays in scientific research and development, it is important to explore strategies for designing multi-sector research collaborations that generate high-quality results and are perceived as legitimate by the public.</p>
<p class="fine-print"><em><span>Joseph D. Martin receives funding from the National Science Foundation.</span></em></p><p class="fine-print"><em><span>Aaron M. McCright, John C. Besley, Kevin Elliott, and Nagwan Zahry do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Scientists need funding to do their work. But a new study finds turning to industry partners taints perceptions of university research, and including other kinds of partners doesn’t really help.John C. Besley, Associate Professor of Advertising and Public Relations, Michigan State UniversityAaron M. McCright, Associate Professor of Sociology, Michigan State UniversityJoseph D. Martin, Fellow-in-Residence at the Consortium for History of Science, Technology, and Medicine and Visiting Research Fellow at the Centre for History and Philosophy of Science, University of LeedsKevin Elliott, Associate Professor of Fisheries & Wildlife and Philosophy, Michigan State UniversityNagwan Zahry, PhD Student in Media and Information Studies, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/769672017-05-04T22:18:47Z2017-05-04T22:18:47ZBehind closed doors: What the Piltdown Man hoax from 1912 can teach science today<figure><img src="https://images.theconversation.com/files/167589/original/file-20170502-17267-1pa48wd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When new discoveries are jealously guarded under lock and key, science suffers.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/rightee/260028769">Andy Wright</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>In 1912, Charles Dawson, an amateur archaeologist in England, claimed he’d made one of the most important fossil discoveries ever. Ultimately, however, his “Piltdown Man” proved to be a hoax. By cleverly pairing a human skull with an orangutan’s jaw – stained to match and give the appearance of age – a mysterious forger duped the scientific world.</p>
<p>In the decades between the find’s unearthing and the revelation it was fraudulent, people in the United States and around the world learned about Piltdown Man as a “missing link” connecting ape and man. Newspaper articles, scientific publications <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674660410">and museum exhibitions</a> all presented Piltdown Man as a legitimate scientific discovery supporting a particular vision of human evolution.</p>
<p>Historians, science writers and others have <a href="https://www.amazon.com/Piltdown-Scientific-Forgery-Frank-Spencer/dp/0198585225">investigated the Piltdown Man controversy</a> over the years, shedding <a href="https://global.oup.com/academic/product/the-piltdown-forgery-9780198607809?q=Piltdown&lang=en&cc=us">new light on the fraud</a>. As we reconsider the nature of “<a href="http://www.cnn.com/2017/01/22/politics/kellyanne-conway-alternative-facts/">facts</a>,” “<a href="https://www.theguardian.com/media/2016/dec/18/what-is-fake-news-pizzagate">fake news</a>” and knowledge production, it’s worthwhile to revisit the Piltdown Man episode.</p>
<p>At issue was not just the deliberate hoax, but also the incomplete flow of information about the purported human ancestor. Soon after the discovery, access to the original materials in England was cut off by a few gatekeepers. Science is suffocated when researchers are unable to reliably corroborate claims made by others. The same issues arise today, with the research community grappling with what’s been called a <a href="http://www.nature.com/news/reproducibility-1.17552">reproducibility crisis</a>; scientists need access to evidence and data in order to replicate (or not) research results. The Piltdown Man controversy lends support to the modern <a href="https://osf.io/preprints/psyarxiv/ak6jr">open science movement</a>, with its call for transparency at every step of the scientific process. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=438&fit=crop&dpr=1 600w, https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=438&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=438&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=551&fit=crop&dpr=1 754w, https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=551&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/167586/original/file-20170502-17285-1pjrddd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=551&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Piltdown Man believers kept tight control over who could get an up-close look at the fossils. Arthur Keith is pictured in the white coat, Charles Dawson over his left shoulder.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Piltdown_gang_(dark).jpg">John Cooke</a></span>
</figcaption>
</figure>
<h2>Limited firsthand access</h2>
<p>Experts immediately cited the discovery of a large human-like cranium with a primitive-looking, ape-like jaw as a major breakthrough. Influential anatomists such as Sir Arthur Keith hailed Piltdown Man as authentic. The popular press on both sides of the Atlantic described prehistoric archaeology as a dramatic hunt for a missing link and came to embrace Piltdown Man within an oversimplified framework of human evolution.</p>
<p>But there were some scientists – notably British Museum curator Reginald A. Smith – who were skeptical from the outset. Doubters noted the major find was attributed to a previously little-known archaeologist.</p>
<p>Curators in the United States impatiently hoped to learn more. But transatlantic requests were denied by their counterparts in Britain, who controlled access to the cranium and jaw and moved the bones to a secure vault at the Natural History Museum in London. Rumors swirled.</p>
<p>Controversial Smithsonian curator <a href="http://anthropology.si.edu/naa/fa/Hrdlicka_Ales.pdf">Aleš Hrdlička described in an annual report</a> his own trip to England:</p>
<blockquote>
<p>“Regrettably… the specimen was not yet available for examination by outsiders, and so no original opinion can be given concerning its status. It represents doubtless one of the most interesting finds relating to man’s antiquity, though seemingly the last word had not yet been said as to its date and especially as to the physical characteristics of the being it stands for.”</p>
</blockquote>
<p>Early in the 20th century, provocative claims about discoveries commonly circulated through letters, rumors and splashy newspaper articles suggesting major new finds. American museums were simultaneously intrigued and frustrated by word of significant finds like Piltdown Man. Some claims proved to be genuine, while many others were found to be falsified or misleading. With limited information, it was especially difficult to determine the validity of claims made by scientists abroad.</p>
<p>News about major discoveries might change planned exhibitions about human evolution or prehistory at museums in New York or Chicago, or influence what students were taught about human history. Uncertainty plagued museums in this regard, as their scientists tried to view skeletons firsthand on visits to European museums and to secure good casts or copies for their own collections. Even amid growing doubts, a major exhibition in San Diego that opened in 1915 <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674660410">prominently featured a Piltdown Man sculpture</a>.</p>
<h2>What’s the damage done?</h2>
<p>This lack of transparency resulted in an absence of accurate information in the scientific community.</p>
<p>It ultimately took until the later decades of the 20th century for the Piltdown bones to be fully discredited. The hoax was <a href="https://theconversation.com/solving-the-piltdown-man-crime-how-we-worked-out-there-was-only-one-forger-63615">likely created by Dawson himself</a>, though <a href="https://www.amazon.com/Piltdown-Men-Ronald-William-Millar/dp/057500536X">who exactly concocted the scam is still debated</a> – “Sherlock Holmes” author <a href="http://www.telegraph.co.uk/science/2016/08/10/sir-arthur-conan-doyle-cleared-of-piltdown-man-hoax/">Arthur Conan Doyle’s name has even been mentioned</a> as a <a href="https://theconversation.com/a-new-twist-to-whodunnit-in-sciences-famous-piltdown-man-hoax-64470">possible perpetrator</a>.</p>
<p>As Berkeley anthropologist <a href="http://anthropology.si.edu/naa/fa/spencer.pdf">Sherwood Washburn offered in a letter</a>, “My opinion is that if more people had seen the originals sooner the fake would have been recognized.” Confusion had arisen because <a href="https://www.youtube.com/watch?v=2LtOkhpR3hY">so few scholars were granted access</a> to the original evidence.</p>
<p>Part of what finally put Piltdown Man to rest was the nature of the new discoveries that kept emerging. They informed researchers’ developing understanding of the human past and began turning much scientific attention away from Europe toward Asia and Africa.</p>
<p>While it is impossible to know with certainty, the Piltdown Man episode likely slowed scientific progress in the global search for human ancestors. What is clear is that the claims worked to muddle popular knowledge about human evolution.</p>
<h2>Piltdown Man’s lessons for today</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=587&fit=crop&dpr=1 600w, https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=587&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=587&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=738&fit=crop&dpr=1 754w, https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=738&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/167590/original/file-20170502-17267-1l3115g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=738&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Museums still display Piltdown Man replicas, not as science but as cautionary reminder.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:Sterkfontein_Piltdown_man.jpg">Anrie</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>The unknown forger behind Piltdown Man intentionally misled the world about human evolution. The false claims rippled through the news media and museum exhibitions. Without access to reliable sources, in this case the original bones, the fraudulent story of Piltdown Man spread like a slowly building wildfire. </p>
<p>The Piltdown Man controversy hints at the dangers of drawing conclusions based on limited or emerging information, for both the public and scientists. In some ways, the whole episode foreshadowed threats we face now <a href="http://www.slate.com/blogs/future_tense/2017/04/13/facebook_s_latest_attempt_to_fight_fake_news_makes_it_seem_more_helpless.html">from fake news</a> and the spread of misinformation about science and many other topics. It is hard to get to the truth – whether about a news story or scientific theory – without access to the evidence supporting it.</p>
<p>Certainly new information flows much more rapidly today – thanks to the internet and social media – potentially a partial corrective to the problems connected to misleading claims. However, scientists and others still need access to accurate and reliable information from original sources. With the Piltdown Man remains locked away in a secure museum vault, speculation and misinformation flourished.</p>
<p>Support is now building for an <a href="http://www.digital-scholarship.org/cwb/WhatIsOA.htm">open access</a> research model: When possible and appropriate, original materials, data and preliminary findings should be made available to others in the field. Scientists also work to balance <a href="https://www.theguardian.com/science/2015/oct/25/discovery-human-species-accused-of-rushing-errors">how quickly they publish new research</a>: It takes time to do careful work, but keeping finds hidden away for too long also impedes progress and understanding. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/167785/original/file-20170503-21649-1x026xi.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Excavations continue in the hobbit cave in Indonesia.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/12394349@N06/14748473277">Bryn Pinzgauer</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Consider a 2003 find from Indonesia that was as shocking as the discovery of Piltdown Man: a nearly complete female skeleton researchers suggested was from a tiny human ancestor they called <a href="http://humanorigins.si.edu/evidence/human-fossils/species/homo-floresiensis">Homo floresiensis</a> (commonly nicknamed “hobbit”). Media speculation ran wild early on about this new species added to our family tree, but paleoanthropology has evolved a great deal since Piltdown Man.</p>
<p>Scientists from several different groups worked to <a href="http://www.livescience.com/29100-homo-floresiensis-hobbit-facts.html">understand the discovery</a> – seeking related finds and going back to the original fossils to systematically assess the claim. Soon additional <a href="https://dx.doi.org/10.1038/nature02999">detailed scientific publications began to emerge</a>, allowing the scientific community to continue <a href="https://dx.doi.org/10.1038/nature04022">to add to the evidence</a> and better <a href="https://dx.doi.org/10.1038/4311029a">scrutinize the discovery</a>. To date, the teeth of as many as 12 individuals have been found.</p>
<p>Homo floresiensis is likely a genuinely groundbreaking discovery – hopefully the more transparent way the research unfolded makes this easier to untangle than Dawson’s claims a century ago. Thoughtful collaboration, making data available openly, more effective <a href="https://www.theguardian.com/science/political-science/2016/may/10/what-has-science-communication-ever-done-for-us">popular science communication</a> and multiple channels of accurate information may help us better respond to the next Piltdown Man.</p>
<p class="fine-print"><em><span>Samuel Redman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A century-old case of scientific fraud illustrates how hard it is to untangle the truth when access to new discoveries is limited.Samuel Redman, Assistant Professor of History, UMass AmherstLicensed as Creative Commons – attribution, no derivatives.