Research integrity – The Conversation
<h1>Fake academic papers are on the rise: why they’re a danger and how to stop them</h1>
<p class="fine-print"><em>Published 6 March 2024</em></p>
<figure><img src="https://images.theconversation.com/files/579491/original/file-20240304-43060-jgl8bk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Fake academic articles can cause significant harm.</span> <span class="attribution"><span class="source">Nora Carol Photography</span></span></figcaption></figure><p>In the 1800s, British colonists in India set about trying to reduce the cobra population, which was making life and trade very difficult in Delhi. They began to pay a bounty for dead cobras. The strategy <a href="https://fee.org/articles/the-cobra-effect-lessons-in-unintended-consequences/">very quickly resulted in the widespread breeding of cobras for cash</a>.</p>
<p>This danger of unintended consequences is sometimes referred to as the “<a href="https://econowmics.com/the-cobra-effect-unintended-consequences/">cobra effect</a>”. It can also be well summed up by <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7901608/">Goodhart’s Law</a>, named after British economist Charles Goodhart. He stated that, when a measure becomes a target, it ceases to be a good measure.</p>
<p>The cobra effect has taken root in the world of research. The “publish or perish” culture, which values publications and citations above all, has resulted in its own myriad of “cobra breeding programmes”. That includes widespread questionable research practices, like playing up the impact of research findings to make work more attractive to publishers.</p>
<p>It’s also led to the rise of paper mills, criminal organisations that sell academic authorship. <a href="https://publicationethics.org/sites/default/files/paper-mills-cope-stm-research-report.pdf">A report on the subject</a> describes paper mills as (the)</p>
<blockquote>
<p>process by which manufactured manuscripts are submitted to a journal for a fee on behalf of researchers with the purpose of providing an easy publication for them, or to offer authorship for sale.</p>
</blockquote>
<p>These fake papers have serious consequences for research and its impact on society. Not all fake papers are retracted. And even those that are retracted often still make their way into systematic literature reviews, which are, in turn, used to draw up policy guidelines, clinical guidelines and funding agendas.</p>
<h2>How paper mills work</h2>
<p>Paper mills rely on the desperation of researchers — often young, often overworked, often on the peripheries of academia struggling to overcome the high obstacles to entry — to fuel their business model. </p>
<p>They are frighteningly successful. The website of one such company based in Latvia advertises the publication of more than 12,650 articles since its launch in 2012. In <a href="https://publicationethics.org/sites/default/files/paper-mills-cope-stm-research-report.pdf">an analysis</a> of just two journals jointly conducted by the Committee on Publication Ethics and the International Association of Scientific, Technical and Medical Publishers, more than half of the 3,440 article submissions over a two-year period were found to be fake.</p>
<p>It is <a href="https://www.nature.com/articles/d41586-023-03464-x">estimated</a> that all journals, irrespective of discipline, experience a steeply rising number of fake paper submissions. Currently the rate is about 2%. That may sound small. But, given the large and growing volume of scholarly publishing, it means that a great many fake papers are published. Each of these can seriously damage patients, society or nature when applied in practice.</p>
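<p><em>The scale is easy to illustrate with rough arithmetic. The sketch below is illustrative only: the 2% rate comes from the estimate above, while the annual output figure is a hypothetical round number, not a figure from this article.</em></p>

```python
# Back-of-the-envelope: how a "small" 2% fake-paper rate scales up.
# Assumed for illustration: a round figure for annual scholarly output.
annual_papers = 3_000_000  # hypothetical global output per year
fake_rate = 0.02           # ~2% of submissions estimated to be fake

fake_papers = annual_papers * fake_rate
print(f"At a {fake_rate:.0%} rate: ~{fake_papers:,.0f} fake papers per year")
```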
<h2>The fight against fake papers</h2>
<p>Many individuals and organisations are fighting back against paper mills.</p>
<p>The scientific community is lucky enough to have several “fake paper detectives” who volunteer their time to root out fake papers from the literature. <a href="https://www.nature.com/articles/d41586-020-01363-z">Elizabeth Bik</a>, for instance, is a Dutch microbiologist turned science integrity consultant. She dedicates much of her time to searching the biomedical literature for manipulated photographic images or plagiarised text. There are <a href="https://www.nature.com/articles/d41586-019-00439-9">others</a> <a href="https://www.aps.org/publications/apsnews/202307/wise.cfm">doing this work</a>, too.</p>
<p>Organisations such as <a href="https://blog.pubpeer.com/publications/45D03A8E43685FFF089F58330F5DC5#1*">PubPeer</a> and <a href="https://retractionwatch.com/">Retraction Watch</a> also play vital roles in flagging fake papers and pressuring publishers to retract them.</p>
<p>These and other initiatives, like the <a href="https://www.stm-assoc.org/stm-integrity-hub/">STM Integrity Hub</a> and <a href="https://united2act.org/">United2Act</a>, in which publishers collaborate with other stakeholders, are trying to make a difference. </p>
<p>But this is a deeply ingrained problem. The use of generative artificial intelligence like ChatGPT will help the detectives – but it will also likely result in more fake papers, which are now easier to produce and more difficult, or even impossible, to detect.</p>
<h2>Stop paying for dead cobras</h2>
<p>The key to changing this culture is a shift in how researchers are assessed.</p>
<p>Researchers must be acknowledged and rewarded for responsible research practices: a focus on transparency and accountability, high quality teaching, good supervision, and excellent peer review. This will extend the scope of activities that yield “career points” and shift the emphasis of assessment from quantity to quality.</p>
<p>Fortunately, several initiatives and strategies already exist to focus on a balanced set of performance indicators that matter. The <a href="https://sfdora.org/">San Francisco Declaration on Research Assessment</a>, established in 2012, calls on the research community to recognise and reward various research outputs, beyond just publication. The <a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000737">Hong Kong Principles</a>, formulated and endorsed at the 6th World Conference on Research Integrity in 2019, encourage research evaluations that incentivise responsible research practices while minimising perverse incentives that drive practices like purchasing authorship or falsifying data.</p>
<p>These issues, as well as others related to protecting the integrity of research and building trust in it, will also be discussed during the <a href="https://wcri2024.org/">8th World Conference on Research Integrity</a> in Athens, Greece in June this year. </p>
<h2>Openness</h2>
<p>Practices under the umbrella of “<a href="https://www.fosteropenscience.eu/content/what-open-science-introduction">Open Science</a>” will be pivotal to making the research process more transparent and researchers more accountable. Open Science is the collective term for a movement of initiatives to make scholarly research more transparent and equitable, ranging from open access publication to citizen science.</p>
<p>Open Methods, for example, involves the <a href="https://www.cos.io/initiatives/prereg">pre-registration</a> of a study design’s essential features before its start. A <a href="https://www.cos.io/initiatives/registered-reports">registered report</a> containing the introduction and methods section is submitted to a journal before data collection starts. It is subsequently accepted or rejected based on the relevance of the research, as well as the methodology’s strength.</p>
<p>The added benefit of a registered report is that reviewer feedback on the methodology can still change the study methods, as the data collection hasn’t started. Research can then begin without pressure to achieve positive results, removing the incentive to tweak or falsify data. </p>
<h2>Peer review</h2>
<p>Peer reviewers are an important line of defence against the publication of fatally flawed or fake papers. In this system, quality assurance of a paper is done on a completely voluntary and often anonymous basis by an expert in the relevant field or subject. </p>
<p>However, the person doing the review work receives no credit or reward. It’s crucial that this sort of “invisible” work in academia be recognised, celebrated and included among the criteria for promotion. This can contribute substantially to detecting questionable research practices (or worse) before publication.</p>
<p>It will incentivise good peer review, so fewer suspect articles pass through the process, and it will also open more paths to success in academia – thus breaking up the toxic publish-or-perish culture.</p>
<p><em>This article is based on <a href="https://www.youtube.com/watch?v=64UTTIJr6wk">a presentation</a> given by the lead author at Stellenbosch University, South Africa on 12 February 2024. Natalie Simon, a communications consultant specialising in research who is part of the communications team for the 8th World Conference on Research Integrity and is also currently completing an MPhil in Science and Technology Studies at Stellenbosch University, co-authored this article.</em></p>
<p class="fine-print"><em><span>Lex Bouter is the founding chair of the World Conferences on Research Integrity Foundation and co-chair of the 8th WCRI in Athens, 2-5 June 2024.</span></em></p>
<p class="fine-print"><em>Lex Bouter, Professor of Methodology and Integrity, Vrije Universiteit Amsterdam. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h1>Early COVID-19 research is riddled with poor methods and low-quality results – a problem for science the pandemic worsened but didn’t create</h1>
<p class="fine-print"><em>Published 23 February 2024</em></p>
<figure><img src="https://images.theconversation.com/files/577159/original/file-20240221-22-ttfzl.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2070%2C1449&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The pandemic spurred an increase in COVID-19 research, much of it with methodological holes.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/coronavirus-damage-royalty-free-image/1266909460">Andriy Onufriyenko/Moment via Getty Images</a></span></figcaption></figure><p>Early in the COVID-19 pandemic, researchers <a href="https://doi.org/10.1038/d41586-020-03564-y">flooded journals</a> with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.</p>
<p>However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used <a href="https://doi.org/10.1162/qss_a_00257">poor quality methods</a>. <a href="https://doi.org/10.1186/s12874-020-01190-w">Several other</a> <a href="https://doi.org/10.1038/s41467-021-21220-5">reviews of</a> <a href="https://doi.org/10.1371/journal.pone.0241826">studies published</a> in medical journals have also shown that much early COVID-19 research used poor research methods.</p>
<p>Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar <a href="https://doi.org/10.3390/ijerph17051729">used data</a> from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited <a href="https://scholar.google.com/citations?hl=en&vq=med_publichealth&view_op=list_hcore&venue=kEa56xlDDN8J.2023">over 11,000 times</a>.</p>
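<p><em>The issue with running 500 analyses is the familiar multiple-comparisons problem: at the conventional p &lt; 0.05 threshold, hundreds of tests on pure noise will still produce dozens of “significant” results. A minimal simulation, assuming independent tests (the 500 figure is from the study above; everything else is illustrative):</em></p>

```python
import random

# Simulate 500 hypothesis tests on pure noise: each has a 5% chance of
# crossing the p < 0.05 threshold even though no real effect exists.
random.seed(1)
n_tests, alpha = 500, 0.05
false_positives = sum(random.random() < alpha for _ in range(n_tests))

print(f"Expected false positives by chance alone: {n_tests * alpha:.0f}")
print(f"Simulated 'significant' results from noise: {false_positives}")
```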
<p>A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not <a href="https://doi.org/10.1089/ees.2016.0223">strongly linked to research quality</a>, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.</p>
<h2>Methodology matters</h2>
<p>I am a <a href="https://scholar.google.com/citations?user=X1o1PaQAAAAJ&hl=en">public health researcher</a> with a long-standing interest in research quality and integrity. This interest lies in a belief that science has helped solve important social and public health problems. Unlike the anti-science movement <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">spreading misinformation</a> about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.</p>
<p>The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information. </p>
<p>For example, researchers have <a href="https://www.sfu.ca/%7Epalys/Campbell&Stanley-1959-Exptl&QuasiExptlDesignsForResearch.pdf">known for decades</a> that for studies evaluating the effectiveness of an intervention, a <a href="https://www.britannica.com/science/control-group">control group</a> is needed to know whether any observed effects can be attributed to the intervention. </p>
<p><a href="https://doi.org/10.1111/dmcn.15719">Systematic reviews</a> pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality, extracted the data and preregistered their protocols. These features are necessary to ensure the review will cover all the available evidence and tell a reader which is worth attending to and which is not.</p>
<p>Certain types of studies, such as one-time surveys of convenience samples that aren’t representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable <a href="https://doi.org/10.1017/S0033291720005127">caused a particular outcome</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WUErib-fXV0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Systematic reviews involve thoroughly identifying and extracting information from existing research.</span></figcaption>
</figure>
<p>All <a href="https://www.equator-network.org/">study designs have standards</a> that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.</p>
<p><a href="https://doi.org/10.1038/s41467-021-21220-5">Studies comparing</a> <a href="https://doi.org/10.1186/s12916-021-01920-x">COVID-19 papers</a> <a href="https://doi.org/10.1371/journal.pone.0241826">with non-COVID-19</a> papers published in the same journals found that COVID-19 papers tended to have lower quality methods and were less likely to adhere to reporting standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against <a href="https://doi.org/10.1136/bmjebm-2020-111584">dredging the data</a> to find “statistically significant” results that could be selectively reported.</p>
<p>Such methodological problems were likely overlooked in the <a href="https://doi.org/10.1038/s41562-020-0911-0">considerably shortened</a> <a href="https://doi.org/10.1162/qss_a_00076">peer-review process</a> for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be <a href="https://doi.org/10.1038/s41467-021-21220-5">13 days, compared with 110 days</a> in 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had a peer-review process of <a href="https://doi.org/10.1162/qss_a_00257">about three weeks</a>.</p>
<h2>Publish-or-perish culture</h2>
<p>These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.</p>
<p>Journals tend to favor <a href="https://doi.org/10.1371/journal.pone.0010068">positive, “novel” findings</a>: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person with head in hands, elbows planted on stacks of paperwork and books littering a desk, glasses and laptop on the side" src="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many researchers feel pressure to publish papers in order to advance their careers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/surrounded-by-work-royalty-free-image/637293916">South_agency/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Academics have worked in a <a href="https://doi.org/10.1089/ees.2016.0223">publish-or-perish</a> <a href="https://doi.org/10.1177/1745691612459058">incentive system</a> for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">flood of mixed-quality COVID-19 information</a> afforded an opportunity to increase their publication counts and boost citation metrics as journals sought and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID papers.</p>
<p>Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of <a href="https://doi.org/10.1002/leap.1566">today’s online</a> <a href="https://doi.org/10.1001/jama.2023.3212">mega-journals</a> publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.</p>
<h2>Healthy criticism</h2>
<p>Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an <a href="https://doi.org/10.1057/palgrave.jors.2602573">open society</a> capable of solving complex problems such as a worldwide pandemic.</p>
<p>Publishing a large volume of misinformation disguised as science during a pandemic <a href="https://doi.org/10.1073/pnas.1912444117">obscures true and useful knowledge</a>. At worst, this can lead to bad public health practice and policy. </p>
<p>Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves <a href="https://doi.org/10.1371/journal.pmed.1001747">critically examining the quality</a> of a study’s designs, statistical methods, reproducibility and transparency, not the <a href="https://doi.org/10.1016/j.jclinepi.2021.05.018">number of times it has been cited</a> or tweeted about.</p>
<p>Science depends on a <a href="https://doi.org/10.1007/s10654-023-01049-6">slow, thoughtful and meticulous approach</a> to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two scientists pipetting liquids under a fume hood, with another scientist in the background examining a sample" src="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rigorous science requires careful deliberation and attention, not haste.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/female-scientist-drops-liquid-into-test-tube-royalty-free-image/127871289">Assembly/Stone via Getty Images</a></span>
</figcaption>
</figure>
<p>Public health heavily draws upon disciplines that are <a href="https://doi.org/10.1038/526182a">experiencing</a> <a href="https://doi.org/10.1177/1745691612462588">replication</a> <a href="https://doi.org/10.1371/journal.pmed.0020124">crises</a>, such as psychology, biomedical science and biology. It is similar to these disciplines <a href="https://doi.org/10.1146/annurev-statistics-031219-041104">in terms of its</a> incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much public health research on COVID-19 shows that the field suffers from similarly poor-quality methods.</p>
<p>Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.</p>
<p class="fine-print"><em><span>Dennis M. Gorman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em>Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M University.</em></p>
<h1>Scientific fraud is rising, and automated systems won’t stop it. We need research detectives</h1>
<p class="fine-print"><em>Published 20 June 2023</em></p>
<figure><img src="https://images.theconversation.com/files/530541/original/file-20230607-19-8xsepz.jpeg?ixlib=rb-1.1.0&rect=0%2C592%2C3994%2C3083&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Unsplash</span></span></figcaption></figure><p>Fraud in science is alarmingly common. Sometimes researchers <a href="https://retractionwatch.com/2016/03/31/neuroscientist-pleads-guilty-to-fraud-gets-two-year-suspended-sentence/">lie about results and invent data</a> to win funding and prestige. Other times, researchers might pay to stage and publish entirely bogus studies to win an undeserved pay rise – fuelling a “paper mill” industry worth <a href="https://ioppublishing.org/news/increasing-confidence-and-trust-in-research/">an estimated €1 billion a year</a>.</p>
<p>Some of this rubbish can be easily spotted by peer reviewers, but the peer review system has become badly stretched by ever-rising paper numbers. And there’s a new threat, as more sophisticated AI is able to <a href="https://www.nature.com/articles/d41586-023-01780-w">generate plausible scientific data</a>. </p>
<p>The latest idea among academic publishers is to use automated tools to screen all papers submitted to scientific journals for telltale signs. However, some of these tools are easy to fool.</p>
<p>I am part of a group of multidisciplinary scientists working to tackle research fraud and poor practice using <a href="https://en.wikipedia.org/wiki/Metascience">metascience</a> or the “science of science”. Ours is a new field, but we already have our own <a href="https://aimos.community/">society</a> and our members have worked with funders and publishers to investigate improvements to research practice.</p>
<h2>The limits of automated screening</h2>
<p>The problems with automated screening are highlighted by a <a href="https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common">new screening tool</a> publicised last month. The tool suggested around one in three neuroscience papers might be fraudulent. </p>
<p>However, this tool detects suspected fraud simply by flagging authors with a non-institutional email (such as gmail.com) and with a hospital affiliation. While this could catch some fraud, it will also flag many honest researchers, and the tool flagged a whopping 44% of genuine papers as potentially fake. </p>
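<p><em>Why a 44% false-positive rate is so damaging becomes clear once base rates are considered: if honest papers vastly outnumber fraudulent ones, almost everything the tool flags will be honest. A hedged sketch (the 44% comes from the evaluation above; the 2% fraud prevalence and perfect sensitivity are assumptions for illustration):</em></p>

```python
# How a 44% false-positive rate plays out when fraud is rare.
# Assumptions for illustration: 2% of papers are fraudulent and the
# tool catches every fraudulent paper (sensitivity = 1.0).
prevalence = 0.02           # assumed share of fraudulent papers
sensitivity = 1.0           # assumed: all fraudulent papers get flagged
false_positive_rate = 0.44  # from the reported evaluation

flagged = prevalence * sensitivity + (1 - prevalence) * false_positive_rate
precision = (prevalence * sensitivity) / flagged

print(f"Share of all papers flagged: {flagged:.1%}")
print(f"Share of flagged papers actually fraudulent: {precision:.1%}")
```

<p><em>Under these assumptions, fewer than one in twenty flagged papers would actually be fraudulent.</em></p>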
<p>One big problem with simple screening tools is that fraudsters will quickly find workarounds – for instance, by telling their clients to use their institutional email addresses to submit papers.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/research-fraud-the-temptation-to-lie-and-the-challenges-of-regulation-58161">Research fraud: the temptation to lie – and the challenges of regulation</a>
</strong>
</em>
</p>
<hr>
<p>Given the amount of money to be made, fraudsters have the time and motivation to find workarounds to automated screening systems.</p>
<p>This is not to say automated tools have no place. They have been used successfully to <a href="https://retractionwatch.com/2017/01/19/turned-cancer-researcher-literature-watchdog/">check papers for faulty experiments</a>, and to hunt for pilfered text reworked to <a href="https://retractionwatch.com/2021/07/19/tortured-phrases-lost-in-translation-sleuths-find-even-more-problems-at-journal-that-just-flagged-400-papers/">avoid plagiarism checkers</a>.</p>
<p>A <a href="https://www.stm-assoc.org/stm-integrity-hub/">project</a> launched by the International Association of Scientific, Technical and Medical Publishers which aims to use screening tools to tackle fraud is also welcome. But automated tools cannot be the only line of defence. </p>
<h2>A crowdfunded detective</h2>
<p>There are remarkably few people who hunt through published research to detect scientific fraud. Perhaps the best known is the Dutch microbiologist Elisabeth Bik, who is an expert at catching manipulated images in scientific papers.</p>
<p>Bik has single-handedly caught multiple massive fraudsters, with the dodgy papers eventually being retracted from the scientific record. </p>
<p>Bik’s work is a tremendous public service. However, she isn’t paid by a university or a scientific publisher. Her detective work – which has seen her face <a href="https://www.theguardian.com/science/2021/may/22/world-expert-in-scientific-misconduct-faces-legal-action-for-challenging-integrity-of-hydroxychloroquine-study">harassment and court cases</a> – is <a href="https://www.bbc.co.uk/programmes/m001lqvg">crowdfunded</a>.</p>
<p>With the billions of dollars in the publishing world, can’t a few million be found for quality control? In the meantime, one of our best-known lines of defence relies on good will and passion.</p>
<p>In Australia, spending just 0.1% of the annual scientific research budget on quality control would be A$12 million per year. This would be enough to fund a whole office of detectives and also training for researchers in good scientific practice, increasing the return on investment for the remaining 99.9% of the annual budget. </p>
<h2>Call the fraud police</h2>
<p>A solution – or at least a partial one – seems obvious: somebody should employ lots of people like Bik to check quality. However, “somebody should” is a dangerous phrase, because it could easily mean nobody will.</p>
<p>Research funders wait for scientific publishers to take action. Publishers expect universities and other institutions to do something. Those institutions in turn look to government for a solution.</p>
<p>Meanwhile, paper mills are happily making a mint, and the world’s pool of scientific evidence is becoming increasingly contaminated by rubbish.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fabricating-and-plagiarising-when-researchers-lie-33732">Fabricating and plagiarising: when researchers lie</a>
</strong>
</em>
</p>
<hr>
<p>Quality control systems need not be expensive, as we don’t need to check every paper in detail. Random spot checks might be effective. </p>
<p>Say one in every 300 submissions gets checked by the “fraud police”. That’s a small probability, but people are notoriously bad at judging small probabilities, as the enduring popularity of lotteries shows.</p>
<p>There would also need to be consequences, such as notifying all the institutions and funders involved, and an expectation of a rapid response. If an institution were involved in multiple cases, publishers could flag all papers from that institution for extra checks. </p>
<h2>Publicity would be a good start</h2>
<p>Of course, this could disadvantage honest researchers from that institution – but personally I would like to know if my colleagues had been submitting fraudulent work. And given that institutions rarely publicise the wrongdoing of their own staff, it may be the first I’d hear of it. </p>
<p>If honest researchers pressure their institutions to act, it would be a tremendous change. Publishers can’t be the only line of defence in tackling fraud. </p>
<p>Funding for <a href="https://bmcresnotes.biomedcentral.com/articles/10.1186/s13104-022-06080-6">stronger screening systems</a> is a great start, but we also need to spend money on people. We need to turn the arms race with the fraudsters into a brains race, because we have the better brains.</p><img src="https://counter.theconversation.com/content/206235/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Adrian Barnett receives funding from National Health and Medical Research Council. He is affiliated with the Association for Interdisciplinary Metaresearch & Open Science. </span></em></p>The only way for science to fight the booming fake research industry is to fund smart, dedicated people to stay ahead of the fraudsters.Adrian Barnett, Professor of Statistics, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1818822022-05-05T16:38:02Z2022-05-05T16:38:02ZResearchers should be assessed on quality not quantity: here’s how<figure><img src="https://images.theconversation.com/files/460797/original/file-20220502-21-uy3xxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Researchers need to be assessed on every aspect of their work, no matter where it takes place.</span> <span class="attribution"><span class="source">Photo by marlenefrancia/Shutterstock</span></span></figcaption></figure><p>How do you assess academic researchers for promotion or funding? This question has become ever more central in higher education settings since the 1980s saw substantial growth in investment in research. This significantly increased the number of researchers in the academic workforce and the need to assess their output for employment, promotion and other career advancements.</p>
<p>One response to the need to “scale up” researcher assessments was to introduce publication metrics. These include counts of publications and citations, as well as more complex measures like the <a href="https://libguides.lib.uct.ac.za/tracking_your_academic_footprint/h-index">Hirsch Index</a> and the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4150161/">Impact Factor</a>. Metrics allowed for relatively easy assessment and comparison of researchers’ careers, and were seen as both more objective and less time-consuming than traditional assessments, in which narrative bio sketches were subjectively peer reviewed.</p>
<p>But it’s now widely accepted that the metrics approach to assessment can negatively affect the research system and research outputs. It values quantity over quality and creates perverse incentives that easily lead to <a href="http://rdcu.be/mPZT">questionable research practices</a>. Relying too heavily on metrics has led researchers to engage in practices that reduce the quality of, and trust in, research. These include “<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5178044/#">salami slicing</a>” (spreading study results across as many publications as possible) and <a href="https://doi.org/10.1017/S0033291718001873">selective reporting</a>. </p>
<p>The pressure to publish also makes researchers vulnerable to predatory journals. Because having many publications and many citations is made so important, the pressure to cut corners is high. This can lead to low-quality, flawed research that typically overstates effects and downplays limitations. When the findings of that research are implemented, harm is done to patients, society or the environment.</p>
<p>Researcher assessment criteria and practices need to be overhauled. We believe the best way to do this is by using the <a href="https://wcrif.org/guidance/hong-kong-principles">Hong Kong Principles on Assessing Researchers</a>, which emerged from the 6th World Conference on Research Integrity in 2019. The principles were developed to reinforce the need to reward researchers for practices that <a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000737">promote trustworthy research</a>. “Trustworthy research” is relevant, valid and done in a transparent and accountable way, without researchers being distracted by other interests.</p>
<p>These principles move beyond merely questioning the use of research metrics for assessment. Instead, they offer alternative indicators for assessing researchers and rewarding behaviour that fosters research integrity and the responsible conduct of research. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-developing-countries-are-particularly-vulnerable-to-predatory-journals-86704">Why developing countries are particularly vulnerable to predatory journals</a>
</strong>
</em>
</p>
<hr>
<p>We believe they should be widely adopted. But there are gaps that must be addressed to ensure that the principles don’t leave institutions in the global south, including those in Africa, out in the cold.</p>
<h2>A possible way forward</h2>
<p>The Hong Kong Principles and similar initiatives are gaining traction and changing researcher assessment in many countries and institutions worldwide.</p>
<p>The <a href="https://wcrif.org/guidance/hong-kong-principles">principles</a> are:</p>
<ul>
<li><p>Assess researchers on responsible practices from conception to delivery. That includes the development of the research idea, research design, methodology, execution and effective dissemination.</p></li>
<li><p>Value the accurate and transparent reporting of all research, regardless of the results.</p></li>
<li><p>Value the practices of open science (open research) such as open methods, materials and data.</p></li>
<li><p>Value a broad range of research and scholarship, such as replication, innovation, translation and synthesis.</p></li>
<li><p>Value a range of other contributions to responsible research and scholarly activity, such as peer review for grants and publications, mentoring, outreach and knowledge exchange.</p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-approach-the-revolution-in-scholarly-publishing-116091">How to approach the revolution in scholarly publishing</a>
</strong>
</em>
</p>
<hr>
<p>The principles also include a strong focus on practical implementation, with an understanding that this is not a straightforward process. They call for the sharing of practices around implementation.</p>
<h2>The challenge of implementation</h2>
<p>The movement to change the way researchers are measured should undoubtedly be embraced. But it’s important this be done in a way that doesn’t leave poorly resourced institutions in the global south behind. Even for researchers in the global north, the sorts of new expectations contained in the principles can be frustrating, because they require additional time and resources.</p>
<p>The most obvious example of this is <a href="https://www.nature.com/articles/d41586-022-00724-0">Principle Three</a>: value the practices of open science. A researcher cannot do this alone. They need to be supported by adequate infrastructure, skills, funding, and even discipline-specific training to ensure their data are published in a way that is FAIR (findable, accessible, interoperable and reusable). There are <a href="https://www.nicis.ac.za/dirisa/">some initiatives</a> in Africa to build this kind of infrastructure and skills. But this demand may prove an insurmountable challenge for many African researchers.</p>
<p>African institutions often have a shortage of skilled research management staff to support researchers and ensure their research practices remain in line with international trends. This means researchers from under-resourced institutions may risk losing opportunities as their institutions fail to keep up with changing international demands. </p>
<p>International funding body Wellcome, for instance, <a href="https://wellcome.org/grant-funding/guidance/open-access-guidance/open-access-policy">has stated</a> that all the institutions it funds must publicly commit to responsible and fair research assessment by signing up to the <a href="https://sfdora.org/">San Francisco Declaration on Research Assessment</a>, the <a href="http://www.leidenmanifesto.org/">Leiden Manifesto</a> or an equivalent. Researchers and organisations that do not comply with this policy will be subject to sanctions, including not having new grant applications accepted or having their funding suspended.</p>
<p>African researchers may join international collaborations because they see this as important for their own careers, and for accessing the funding needed to unpack important questions within the communities in which they work. Funders and research team leaders from wealthier countries must ensure the research systems needed to support, realise and adequately acknowledge contributions from less-resourced places are in place. If they are not, capacity development must be funded and implemented as needed. </p>
<h2>A balance</h2>
<p>This issue will be among those tabled at the <a href="https://wcri2022.org/">7th World Conference on Research Integrity</a> in Cape Town, South Africa, from 29 May to 1 June. Its theme, Fostering Research Integrity in an Unequal World, offers an ideal opportunity to discuss how best to balance the necessity of changing research assessment practices with the risk to poorer institutions and less-resourced researchers. A special symposium will be dedicated to the implementation of the Hong Kong Principles in an African context.</p><img src="https://counter.theconversation.com/content/181882/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Lyn Horn receives funding from US Office of Research Integrity, the South African Department of Science and Innovation and the South African National Research Foundation. This funding is for the 7th World Conference on Research Integrity.
She is currently on the international advisory board, as a research ethics advisor, for four different clinical trials in the field of HIV and TB research. </span></em></p><p class="fine-print"><em><span>Lex Bouter is the chair of the World Conferences on Research Integrity Foundation and one of the cochairs of the 5th, 6th and 7th WCRI . He is also one of the coauthors of the Hong Kong Principles.</span></em></p>The movement to change the way researchers are measured should undoubtedly be embraced.Lyn Horn, Director, Office of Research Integrity, University of Cape TownLex Bouter, Professor of Methodology and Integrity, Vrije Universiteit AmsterdamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1760192022-02-02T19:10:59Z2022-02-02T19:10:59ZAustralia needs an Office for Research Integrity to catch up with the rest of the world<figure><img src="https://images.theconversation.com/files/443679/original/file-20220201-20-1szz1z2.jpg?ixlib=rb-1.1.0&rect=42%2C0%2C4760%2C3163&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The Swedish government established a national Research Misconduct Board in 2020, after concluding institutions couldn’t be trusted to investigate allegations of serious research misconduct themselves. This followed <a href="https://forbetterscience.com/2019/10/24/karolinskas-haunted-leadership/">botched investigations</a> into the conduct of surgeon Paolo Macchiarini, who transplanted experimental artificial tracheas into 20 patients, 17 of whom later died. His employer, the Karolinska Institute, had <a href="https://drive.google.com/open?id=0By2HqPi4t2RbUnlGcGhxYU1DVGI2d0JGWXJhaThvYzF1bGxJ">initially cleared him</a>. Later independent investigations found he had committed misconduct. </p>
<p>Ultimately, both the vice chancellor and dean of research at the institute <a href="https://www.nature.com/articles/nature.2016.19374">lost their jobs</a>. The <a href="https://www.science.org/content/article/top-nobel-prize-administrator-resigns-wake-macchiarini-scandal">secretary-general</a> of the <a href="https://ki.se/en/about/the-nobel-assembly-at-karolinska-institutet">Nobel Assembly</a> at Karolinska, which issues the Nobel Prize in Physiology or Medicine, also resigned. The government <a href="https://sverigesradio.se/artikel/6510837">dismissed</a> the entire university board. But Macchiarini’s patients paid the heaviest price.</p>
<p>Sweden is just the most recent of more than 20 European nations that have national offices for research integrity. So do the UK, US, Canada, Japan and China. Australia, which still lacks an Office for Research Integrity, is being left behind. </p>
<p>Multiple <a href="https://www.abc.net.au/news/2014-12-12/university-of-queensland-professor-on-fraud-charges/5964476">recent</a> <a href="https://www.abc.net.au/news/2022-01-31/on-the-trail-of-dodgy-academic-research/100788052">reports</a> of <a href="https://www.abc.net.au/news/2021-11-24/qld-professor-mark-smyth-stood-down-qimr-investigation/100646988">allegations</a> of <a href="https://www.smh.com.au/national/macquarie-university-considers-investigating-suspected-research-fraud-20211214-p59hfr.html">research</a> <a href="https://www.smh.com.au/national/university-investigates-claims-of-research-misconduct-in-studies-on-ageing-20211013-p58zlx.html">fraud</a> in Australia show the urgent need for an independent national regulator.</p>
<h2>How does Australia handle research misconduct?</h2>
<p>Australia’s system for handling allegations of research misconduct resembles the one Sweden abandoned. We persist with a self-regulation model. Yet royal commission after royal commission has shown self-regulation does not work in the <a href="https://www.royalcommission.gov.au/banking">financial sector</a>, with <a href="https://www.royalcommission.gov.au/child-abuse">institutions that care for children</a>, or for <a href="https://media.opengov.nsw.gov.au/pairtree_root/c5/53/7b/ab/e0/5c/45/7c/aa/3b/25/85/d0/f5/23/ab/obj/RCPS_Report_Volume_2.pdf">police forces</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/there-is-a-problem-australias-top-scientist-alan-finkel-pushes-to-eradicate-bad-science-123374">'There is a problem': Australia's top scientist Alan Finkel pushes to eradicate bad science</a>
</strong>
</em>
</p>
<hr>
<p>Research in Australia funded by the National Health and Medical Research Council (<a href="https://www.nhmrc.gov.au/about-us/who-we-are">NHMRC</a>) or the Australian Research Council (<a href="https://www.arc.gov.au/about-arc">ARC</a>) must comply with the <a href="https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018">Australian Code for the Responsible Conduct of Research</a>. </p>
<p>The <a href="https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2007">2007 version</a> of this code required independent, multi-person inquiry panels to handle allegations of serious misconduct. Findings were to be made public. Appeals could be made if new evidence arose. </p>
<p>In 2018 the code was changed. The changes meant:</p>
<ul>
<li>a single person from the same institution can now carry out inquiries</li>
<li>secrecy must be maximised, with no requirement for public reports</li>
<li>appeals can only be considered based on process and not on evidence, substance or merit.</li>
</ul>
<p>One stunning change to the code – worthy of the political satire <a href="https://en.wikipedia.org/wiki/Yes_Minister">Yes Minister</a> – was to make the term “research misconduct” optional. Institutions can now make up their own definition or dispense with the term entirely – and thus be rendered free of research misconduct in perpetuity!</p>
<p>Scientists are human, and some will do the wrong thing, just as there are dishonest individuals in all professions. Australian scientists are no more honest or dishonest than those in other countries. However, we rarely hear of cases of research misconduct, because the reflex of institutions is to protect their reputations by covering things up. </p>
<h2>What needs to be done?</h2>
<p>What institutions should do instead is enhance their reputations by handling cases rigorously, fairly and openly. At the 2010 <a href="https://wcrif.org/">World Conference on Research Integrity</a>, a panel member was asked if she would ever consider joining a university that had had a case of research misconduct. The eminent expert said she would never join a university that had not had a case, because that meant they were either ignoring cases, or were not doing enough research. </p>
<p>We need to recognise and applaud the whistle-blowers who report research misconduct and those institutions that do take a rigorous stand. The <a href="https://www.abc.net.au/news/2014-04-04/uq-research-retraction-barwood-murdoch/5368800">University of Queensland</a> and <a href="https://www.smh.com.au/national/top-scientist-referred-to-corruption-watchdog-over-alleged-research-misconduct-20211123-p59bar.html">QIMR Berghofer Medical Research Institute</a> have set the example in recent cases. But their tasks would be much easier if they could refer cases to an independent national Office for Research Integrity.</p>
<p>Australia needs an Office for Research Integrity to handle cases in all kinds of scholarly practice, not just in biomedical research, but also in physics, engineering and the humanities. In his comprehensive book <a href="https://global.oup.com/academic/product/scholarly-misconduct-9780198755401?cc=au&lang=en&">Scholarly Misconduct: Law, Regulation and Practice</a>, Ian Freckelton QC concluded: </p>
<blockquote>
<p>“What has become clear is that the maladies afflicting scholarship cannot be dealt with wholly internally within universities and research bodies […] What is required is the creation by government of external bodies. </p>
<p>“Assertions that [allegations of research misconduct and conflict of interest] can be dealt with adequately by internal investigations are not credible given what has occurred in the recent past. Legal and health professions are no longer permitted in many countries to self-regulate. External, independent decision-making is necessary for community confidence.”</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/research-fraud-the-temptation-to-lie-and-the-challenges-of-regulation-58161">Research fraud: the temptation to lie – and the challenges of regulation</a>
</strong>
</em>
</p>
<hr>
<h2>Take the best from overseas</h2>
<p>There is no need for Australia to reinvent the wheel. We should take the best from the various offices for research integrity and ombudsmen overseas, and construct the very best office here in Australia. This office would:</p>
<ul>
<li>allow whistle-blowers to be heard</li>
<li>have no conflicts of interest</li>
<li>be able to draw on the necessary experience and specialist expertise</li>
<li>be able to act rapidly and transparently.</li>
</ul>
<p>What is unusual about the call for such a watchdog in Australia is that it is coming from the researchers themselves. They range from whistle-blowers with direct experience and early-career researchers who struggle to get funded, to established scientists such as those in the Australian Academy of Science who are now <a href="https://www.smh.com.au/national/suspected-fraud-cases-prompt-calls-for-research-integrity-watchdog-20211214-p59hhh.html">leading the push</a>.</p>
<p><a href="https://www.sportintegrity.gov.au/">Sport Integrity Australia</a> manages misconduct in sport. We now need bipartisan support for an Australian Office for Research Integrity to handle the <a href="https://www.britannica.com/biography/Lance-Armstrong">Lance Armstrongs</a> of Australian research.</p><img src="https://counter.theconversation.com/content/176019/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>David Vaux has received grant funding from the NHMRC. He is currently an Honorary Fellow of The Walter and Eliza Hall Institute, and a member of the Center for Scientific Integrity (NY), which functions as the board for Retraction Watch.</span></em></p>Australian scientists are no more honest or dishonest than those in other countries that have national bodies to investigate research fraud. We have a sport integrity watchdog but not one for research.David Vaux, Medical Researcher, Walter and Eliza Hall InstituteLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1211222019-07-31T07:26:49Z2019-07-31T07:26:49ZNo, it’s not OK for the government to use your prescription details to recruit you for a study<figure><img src="https://images.theconversation.com/files/286343/original/file-20190731-186833-1xufiuy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">When pursuing information for big data projects, the risks to individual autonomy and privacy are easily overlooked.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/download/confirm/468217937?src=D4-BMKyV3jURpD-doVNm3w-1-4&studio=1&size=huge_jpg">Shutterstock</a></span></figcaption></figure><p>The Department of Human Services (DHS) is under scrutiny this week after <a href="https://www.theage.com.au/healthcare/medicare-data-used-to-recruit-people-with-bipolar-for-research-20190722-p529k9.html">the Nine papers revealed</a> the department sent letters to 50,000 patients who were previously prescribed lithium. DHS was seeking to recruit the recipients into a non-government study on bipolar disorder. </p>
<p>Psychiatrists raised concerns after receiving complaints from patients who had received the letters and accused their psychiatrists of breaching patient confidentiality. </p>
<p>The doctors didn’t breach confidentiality. Instead, DHS, which delivers social and health payments and services (including via Medicare), used data held by Medicare, on behalf of researchers from the QIMR Berghofer Medical Research Institute, to contact patients who had previously been prescribed lithium. </p>
<p>Patients were invited to complete an online survey, and potentially provide <a href="https://networkcultures.org/blog/2019/01/22/not-as-good-as-gold-goodness-of-genomic-data/">DNA samples</a>, to identify genetic markers associated with bipolar disorder.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/after-the-medicare-breach-we-should-be-cautious-about-moving-our-health-records-online-80472">After the Medicare breach, we should be cautious about moving our health records online</a>
</strong>
</em>
</p>
<hr>
<p>Research using big datasets is increasingly common. Big data, combined with genomics, is an extremely powerful research tool, offering tantalising opportunities to gain insight into major challenges to human health. It’s reliant on collection of data from many different individuals. </p>
<p>But in pursuing population-scale benefits, the risks to individual autonomy and privacy are easily overlooked – or worse, they’re disregarded. </p>
<h2>Has the ethics framework been disregarded?</h2>
<p>This week’s revelations highlight issues with the existing ethical and legal framework governing data access. </p>
<p>The <a href="https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018">National Health and Medical Research Council’s national statement</a> governs the conduct of ethical research in Australia. It outlines the structure and approval process of human research ethics committees, and identifies issues that committees must consider when deciding whether to grant approval. That includes how researchers propose to recruit research participants. </p>
<p>Researchers often struggle to recruit sufficient participants to their research. Participants must agree to participate voluntarily, without coercion. For some research, recruitment may be particularly challenging. The requirements imposed on participants may be excessively onerous, inconvenient or just distasteful. </p>
<h2>Why is DHS accessing this data?</h2>
<p>Some people in the community are especially vulnerable to breaches of confidentiality and privacy due to social stigma. This might include people with a mental illness, sexually transmitted disease, criminal conviction or parenting order. Yet DHS may hold or access data on these vulnerable groups.</p>
<p>Datasets such as those overseen by DHS provide researchers with a goldmine of potential participants who can be approached more directly. In this case, they were approached by DHS itself on behalf of researchers. This decision to contact patients on behalf of the researchers is ethically and legally questionable. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/a-new-model-for-research-ethics-reviews-38296">A new model for research ethics reviews</a>
</strong>
</em>
</p>
<hr>
<p>While DHS does release, for research purposes, statistical information from the various schemes it administers, <a href="https://www.humanservices.gov.au/organisations/about-us/publications-and-resources/privacy-policy">DHS’s privacy policy</a> states it will not release identifiable or identifying information for research unless the research has ethics committee approval and the person consents. That restriction is fundamental to <a href="https://www.legislation.gov.au/Details/C2019C00025">Australian privacy law</a> and <a href="https://www.pmc.gov.au/sites/default/files/publications/aust_govt_public_data_policy_statement_1.pdf">policy</a>.</p>
<p>DHS hasn’t released identifiable patient data. Instead, it has used personal data collected through prescriptions to contact patients on behalf of a third party – in this case, the researchers. Implicitly, its mail-out signals that everyone on the list has a specific medical condition.</p>
<p>Problematically, DHS’s privacy policy does not indicate it will collect data for this purpose, or use collected data in this way.</p>
<h2>It’s about respect, not convenience</h2>
<p>Given the significance of privacy and the government’s <a href="https://ogpau.pmc.gov.au/">supposed commitment to transparency</a>, many questions remain unanswered:</p>
<p>Which committee granted ethics approval for the research? Did that committee have the necessary expertise in data access and privacy law to make that determination, as required under the national statement?</p>
<p>How did the researchers justify their choice of a recruitment strategy which necessarily compromises the reasonable expectations of patients about how, and for what purposes, their personal information will be used by DHS? </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/286357/original/file-20190731-186819-t5t139.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">People should be able to expect that details shared confidentially with medical providers will not be used in this way.</span>
<span class="attribution"><span class="source">From shutterstock.com</span></span>
</figcaption>
</figure>
<p>Did DHS advise the relevant ethics committee that it was an agent for the researchers? How did DHS decide to use data in this way? Who made that determination? What process did they rely on to reach that decision? Are they accountable?</p>
<p>Who funded the mail-out? Did DHS fund the printing and distribution of the letters from its own budget? Did the researchers pay them to do so? What effect does any commercial relationship between the researchers and the department have on the independence of the research? </p>
<p>Can other researchers expect comparable support from DHS in future? If not, how will DHS decide between research it deems to be of merit, and research it deems to be unworthy of its assistance? And is that its role?</p>
<h2>DHS shouldn’t just give out your data</h2>
<p>If DHS continues acting for researchers in this way, Australians with data contained in DHS data holdings could be inundated with requests to participate in research, resulting in <a href="https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018#toc__1277">disengagement and research fatigue</a>.</p>
<p>DHS’s <a href="https://croakey.org/consumers-and-providers-question-governments-management-of-personal-health-data/">reported response</a> when questioned by journalists was that people could opt out of further mail-outs by emailing the department. That shifts the burden of good data governance onto consumers, rather than onto those who benefit from the data’s use, for the sake of bureaucratic convenience.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-prosecutions-for-welfare-fraud-have-declined-in-australia-93961">Why prosecutions for welfare fraud have declined in Australia</a>
</strong>
</em>
</p>
<hr>
<p>The fundamental issue here is one of trust, coercion and consent. People providing their personal information to the government, to access government services including prescribed medications, are doing so through necessity, not choice. Failure to provide the information results in denial of access to the service. </p>
<p>For DHS to use the information provided in a way that is outside both the reasonable expectation of those providing it, and its own privacy policy, is deeply disrespectful.</p><img src="https://counter.theconversation.com/content/121122/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Bruce Baer Arnold is affiliated with the Australian Privacy Foundation and OECD Health Data Regulation working parties.</span></em></p><p class="fine-print"><em><span>Wendy Bonython is the legal member of a Commonwealth government department Human Research Ethics Committee. </span></em></p>The government has breached the public’s trust and its own privacy policy by using Medicare data about Australians’ prescribing habits to recruit participants for a study.Bruce Baer Arnold, Assistant Professor, School of Law, University of CanberraWendy Bonython, Associate Professor of Law, Bond UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1126492019-03-06T11:38:19Z2019-03-06T11:38:19ZOpioid crisis shows partnering with industry can be bad for public health<figure><img src="https://images.theconversation.com/files/262289/original/file-20190305-48423-1nk343s.jpg?ixlib=rb-1.1.0&rect=409%2C122%2C4800%2C3268&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What is each partner looking to get?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/doctor-receiving-patient-office-296666489">Africa Studio/Shutterstock.com</a></span></figcaption></figure><p>“Show me the bodies!” someone demanded at the end of my lecture a few years ago.</p>
<p>As a <a href="https://scholar.google.com/citations?user=MpKuUlkAAAAJ&hl=en&oi=ao">scholar of public health ethics, law and policy</a>, I had just warned an audience of professors and university administrators about the perils of partnering with, or taking money from, corporations – <a href="https://global.oup.com/academic/product/the-perils-of-partnership-9780190907082">a common practice in public health research and policymaking</a>.</p>
<p>It’s not always possible to prove harm like that, I said. But there are other reasons for government, the academy and public health organizations to maintain arm’s length relationships with corporations. Among them, preserving integrity and public trust.</p>
<p>As I document extensively in my <a href="https://global.oup.com/academic/product/the-perils-of-partnership-9780190907082">book on corporate influence in public health</a>, partnerships distort research agendas, not merely of individual researchers but of entire fields of research. They also reinforce the framing of public health problems and their solutions in ways that are most favorable to the corporate partners.</p>
<p>These concerns are most acute when corporations are creating or exacerbating a public health problem. Think of a soda company <a href="https://www.parklives.com/">sponsoring exercise initiatives</a> to burnish its reputation and deflect attention from the role of its brands in the obesity epidemic. But close relationships with corporations can be problematic even when companies are working on medicines or other potential solutions to health problems.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/262291/original/file-20190305-48420-8prxca.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Not surprisingly, opioid manufacturers want to sell more opioids.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Opioids-Governors/9c89d0059c8e4dbba46b71391b21cfdc/1/0">AP Photo/Toby Talbot</a></span>
</figcaption>
</figure>
<p>I failed to convince that skeptical audience member. But recent research found the bodies, or, at the very least, pointed to one place where we might start digging: <a href="https://www.cdc.gov/drugoverdose/epidemic/index.html">the opioid crisis</a>. The new study concluded that drug companies’ marketing of opioids to physicians was “<a href="https://doi.org/10.1001/jamanetworkopen.2018.6007">associated with increased opioid prescribing</a> and, subsequently, with elevated mortality from overdoses.” Recent court filings also suggest that doctors who met with opioid drug reps were <a href="https://www.documentcloud.org/documents/5715954-Massachusetts-AGO-Amended-Complaint-2019-01-31.html">10 times more likely</a> to have prescribed opioids to patients who later died of an overdose.</p>
<p>Marketing to physicians is only one of the strategies employed by opioid manufacturers. Between 2012 and 2017, <a href="https://www.hsgac.senate.gov/imo/media/doc/REPORT-Fueling%20an%20Epidemic-Exposing%20the%20Financial%20Ties%20Between%20Opioid%20Manufacturers%20and%20Third%20Party%20Advocacy%20Groups.pdf">five opioid manufacturers gave nearly US$9 million</a> to 14 patient advocacy groups and medical societies. Although this sum is a drop in the ocean for drug companies with billions of dollars in opioid revenues, these were substantial sums for the recipients. And the companies’ investments paid off.</p>
<p>Many of the groups <a href="https://www.hsgac.senate.gov/imo/media/doc/REPORT-Fueling%20an%20Epidemic-Exposing%20the%20Financial%20Ties%20Between%20Opioid%20Manufacturers%20and%20Third%20Party%20Advocacy%20Groups.pdf">issued guidelines</a> minimizing the addiction risks of prescription opioids. They also <a href="https://www.hsgac.senate.gov/imo/media/doc/REPORT-Fueling%20an%20Epidemic-Exposing%20the%20Financial%20Ties%20Between%20Opioid%20Manufacturers%20and%20Third%20Party%20Advocacy%20Groups.pdf">lobbied extensively</a> to defeat legislation restricting opioid prescribing. When the CDC issued its draft guidelines to limit opioid use in 2016, opposition was significantly higher among <a href="https://doi.org/10.1001/jamainternmed.2016.8471">organizations that had received industry funding</a>.</p>
<p>The most commonly touted solution to financial conflicts of interest is disclosure of the conflict. The <a href="https://www.healthaffairs.org/do/10.1377/hpb20141002.272302/full/">Physician Payments Sunshine Act of 2010</a> requires drug companies to disclose gifts to physicians and teaching hospitals. Democratic senator Claire McCaskill has <a href="https://www.congress.gov/bill/115th-congress/senate-bill/3565">introduced a bill</a> to extend these provisions to cover payments made to patient advocacy groups.</p>
<p>But disclosure, while necessary, is not sufficient for addressing corporate influence in science, medicine and public health. While researching my book, I found plenty of evidence that drug, food and soda companies – among others – weave powerful webs of influence when they support the work of public health agencies, universities, patient advocacy groups and health professional associations.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/262290/original/file-20190305-48444-840lcv.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Government and academia have responsibilities that conflict with corporate profit.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/pillars-black-white-393090355">Brandon Bourdages/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>It’s reasonable to expect corporations to exercise influence to the full extent permitted by law. But I believe governments have a responsibility to insulate themselves from corporate influence. Only by doing so can they meet their obligations to protect and promote public health. And universities should do likewise in order to protect scientific integrity. By inviting companies to partner, government and the academy play into corporate strategies of influence, imperiling their own integrity as well as science and public health.</p>
<p>When the National Institutes of Health launched a partnership initiative to address the opioid crisis in 2017, it <a href="https://www.nih.gov/research-training/medical-research-initiatives/heal-initiative/participant-list-development-safe-effective-non-addictive-pain-treatments">turned to drug companies</a> for guidance. They included an opioid company that <a href="https://www.nytimes.com/2007/05/10/business/11drug-web.html">pleaded guilty in 2007</a> to misleading regulators, doctors and patients about addiction risks and potential for abuse – and then continued its aggressive marketing for another decade, according to <a href="https://www.nytimes.com/2019/02/01/business/purdue-pharma-mckinsey-oxycontin-opiods.html">recent court filings</a>. These documents also indicate that, while running <a href="https://www.statnews.com/2017/12/22/purdue-ad-campaign/">newspaper ads in 2017</a> claiming that it was a “partner” in the fight against the opioid crisis, the company was still working on plans to expand the opioid market.</p>
<p><a href="https://doi.org/10.2105/AJPH.2018.304881">The world needs better options for pain management</a>. And the opioid industry may play a role in developing some of these options. But partnering with industry is hazardous – even if, as its director has pledged, the NIH enters these arrangements “<a href="https://www.nih.gov/about-nih/who-we-are/nih-director/statements/statement-public-private-partnerships-part-nih-heal-initiative">with the utmost transparency</a>,” and does not take cash payments.</p>
<p>Money need not change hands for partnerships to create reciprocity and influence, burnish the reputation of drug companies, and defuse support for more effective regulation of the marketing and prescribing of drugs. Collaboration may also lead to the neglect of other potential solutions to the opioid crisis – and other potential pain remedies beyond drug therapies.</p>
<p>If the opioid crisis has taught people anything, it’s that the interests of pharmaceutical companies and public health inevitably diverge. While opioid manufacturers and distributors were making billions of dollars, they were sowing the seeds of a crisis that has contributed to the <a href="https://www.cdc.gov/drugoverdose/data/prescribing.html?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fdrugoverdose%2Fdata%2Foverdose.html">deaths of more than 218,000 Americans</a> and counting. In addition, the total societal <a href="https://www.whitehouse.gov/sites/whitehouse.gov/files/images/The%20Underestimated%20Cost%20of%20the%20Opioid%20Crisis.pdf">costs of the opioid epidemic</a> exceed half-a-trillion dollars per year.</p>
<p>Given these catastrophic costs, policymakers cannot afford to repeat and compound the errors of the past. While tackling pain management and opioid addiction, they must not neglect a third public health challenge: their own “addiction” to partnerships with the private sector. But, before public health officials can wean themselves off these collaborations, they must first acknowledge that they have a problem.</p>
<p class="fine-print"><em><span>Jonathan H. Marks does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The interests of pharmaceutical companies and public health are not the same. Industry dollars can distort research agendas, while framing health challenges and solutions in ways that benefit corporations.Jonathan H. Marks, Director of the Bioethics Program and affiliate faculty in Law and International Affairs, Penn StateLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/849462017-11-07T03:26:21Z2017-11-07T03:26:21ZRather than being free of values, good science is transparent about them<figure><img src="https://images.theconversation.com/files/193459/original/file-20171106-1055-1tmbboh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's good for scientists to work in glass laboratories.</span> <span class="attribution"><a class="source" href="https://www.broadinstitute.org/photos-broad-institute/photos-broad-institute">Len Rubenstein</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Scientists these days face a conundrum. As Americans are buffeted by accounts of <a href="http://www.reuters.com/article/us-usa-trump-media/trump-suggests-challenging-tv-network-licenses-over-fake-news-idUSKBN1CG1WB">fake news</a>, <a href="https://www.seeker.com/alternative-facts-have-plagued-science-for-decades-2272707511.html">alternative facts</a> and <a href="https://www.theguardian.com/us-news/2017/oct/14/russia-us-politics-social-media-facebook">deceptive social media campaigns</a>, how can researchers and their scientific expertise contribute meaningfully to the conversation?</p>
<p>There is a common perception that science is a matter of hard facts and that it <a href="https://ehjournal.biomedcentral.com/articles/10.1186/1476-069X-12-69">can and should remain insulated</a> from the social and political interests that permeate the rest of society. Nevertheless, many historians, philosophers and sociologists who study the practice of science have come to the conclusion that trying to kick values out of science risks throwing the baby out with the bathwater. </p>
<p>Ethical and social values – like the desire to promote economic development, public health or environmental protection – often play integral roles in scientific research. By acknowledging this, scientists might seem to give away their authority as a defense against the flood of misleading, inaccurate information that surrounds us. But I argue in my book “<a href="https://global.oup.com/academic/product/a-tapestry-of-values-9780190260811?lang=en&cc=us">A Tapestry of Values: An Introduction to Values in Science</a>” that if scientists take appropriate steps to manage and communicate about their values, they can promote a more realistic view of science as both value-laden and reliable.</p>
<h2>Values can be good or bad</h2>
<p>There is no question, of course, that values can cause problems in science. Powerful organizations like the <a href="https://www.ucpress.edu/ebook.php?isbn=9780520950436">tobacco</a> and <a href="https://www.ucpress.edu/book.php?isbn=9780520275829">lead</a> industries have manipulated science to boost their profit margins and prevent regulation of their products. The fossil fuel industry has engaged in similar tactics to <a href="http://www.merchantsofdoubt.org/">spread misinformation about climate change</a>.</p>
<p>And it’s not just big business that spreads misleading science – <a href="https://www.healthline.com/health-news/fake-news-plaguing-world-of-science#1">many different groups</a> peddle questionable claims about everything from vaccines and alternative medicines to genetically modified foods and diet strategies. In these cases, economic values or ideological commitments have inclined people to ignore or suppress evidence that runs counter to their preferences.</p>
<p>But I’d argue that it would be a grave mistake to try to eliminate all value considerations from scientific research. At the very least, most people want scientists to respect human rights and animal welfare when they design potentially harmful experiments.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=601&fit=crop&dpr=1 600w, https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=601&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=601&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=755&fit=crop&dpr=1 754w, https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=755&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/193462/original/file-20171106-1027-1qxpql5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=755&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What research gets funded, from a limited pool of money, is a value-laden decision.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/ha96QM1eH74">Andrew Robles on Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>We as citizens also want scientists to keep social priorities in mind when deciding what research projects to undertake. In part, this involves choosing among an array of possible topics – for example, deciding how to divide up medical research investments among cancer, AIDS, diabetes and mental health.</p>
<p>It also involves deciding how scientists study these topics. Should they focus more attention on <a href="https://www.cancer.org/research/we-conduct-cancer-research/epidemiology/cancer-prevention-studies-save-lives.html">preventing environmentally induced cancers</a>? Or treating cancers that are already present? How much money should go toward developing new drugs for treating depression as opposed to studying how to <a href="https://www.mayoclinic.org/diseases-conditions/depression/in-depth/depression-and-exercise/art-20046495">mitigate some cases</a> by modifying diet, exercise or the social environment? Social values are obviously relevant to making these judgments.</p>
<h2>Between hard facts and unfounded advocacy</h2>
<p>A great deal of science is now performed in an effort to inform policymakers who need to make practical decisions about real-world problems such as regulating industrial chemicals or managing wildlife populations or preventing disease outbreaks. This sort of research can be plagued by uncertainties; there’s almost never one clear-cut “right” answer. </p>
<p>In these research contexts, scientists must decide how to extrapolate beyond the available data and weigh complex bodies of evidence in order to <a href="https://www.upress.pitt.edu/BookDetails.aspx?bookId=35967">help policymakers draw conclusions</a>. <a href="https://global.oup.com/academic/product/exploring-inductive-risk-9780190467722?lang=en&cc=us">Values have a role to play</a> in making these decisions. If one errs in one direction, one often risks overregulation and economic losses. Err the other way, and public health and environmental resources are often at stake. It makes sense to think about these consequences when deciding which way to lean.</p>
<p>Even the language employed by scientists is often laden with values. For example, environmental scientists have <a href="https://yalebooks.yale.edu/book/9780300205817/metaphors-environmental-sustainability">debated the merits</a> of talking about “invasive,” “nonnative,” “exotic” or “alien” species, given that these are metaphorical terms that have great significance in contemporary social and political debates. In biomedical research, scientists have <a href="http://science.sciencemag.org/content/351/6273/564">struggled to decide</a> whether the benefits of employing racial categories outweigh the dangers of promoting misleading notions about race as a biological phenomenon. And the World Health Organization suggested in 2015 that scientists should <a href="http://www.sciencemag.org/news/2015/05/discovered-disease-who-has-new-rules-avoiding-offensive-names">stop using disease names</a> like swine flu, athlete’s foot or Marburg disease, because they could stigmatize animals, people or places. In cases like these, there may be no strictly value-neutral ways of categorizing and describing phenomena.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=355&fit=crop&dpr=1 600w, https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=355&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=355&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=446&fit=crop&dpr=1 754w, https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=446&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/193481/original/file-20171106-1011-1lnb5nq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=446&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Broadening the researcher pool beyond just the types who attended an international scientific meeting in 1879 means people are bringing different sets of values to the lab bench.</span>
<span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:II_International_Meteorological_congress_Rome_1879.jpg">Музей-архив Д. И. Менделеева</a></span>
</figcaption>
</figure>
<h2>Recognizing values helps science’s integrity</h2>
<p>Even if we cannot turn science into a value-free endeavor, researchers can still take important steps to preserve its legitimacy. One way to do that is for the scientific community to <a href="https://ehp.niehs.nih.gov/1408107/">promote as much transparency in science as possible</a> so that the influences of values can be recognized. Depending on the context, this can involve many different activities: consistently publishing results, using open-access journals, making data publicly available, providing data analysis plans before studies begin, making materials and methods available to other researchers and disclosing conflicts of interest.</p>
<p>Both citizens and scientists also need to scrutinize and discuss the influences of values as effectively as possible, using many different venues: Journals can promote <a href="http://science.sciencemag.org/content/357/6348/256?ijkey=aoQ8T2TirYWfM&keytype=ref&siteid=sci">thoughtful peer-review processes</a>, government agencies can maintain effective science advisory boards, scientific societies can create reports on debated topics, <a href="https://www.niehs.nih.gov/research/supported/translational/community/index.cfm">citizens can get involved in research projects</a> and the scientific community can encourage new perspectives by <a href="http://www.sciencemag.org/careers/2015/12/moving-toward-inclusion">promoting greater diversity</a> in its membership. By taking these steps, scientists and stakeholders can decide how best to handle important judgments, and they can distinguish scientific conclusions that are well supported from those that are more tenuous.</p>
<p>Because science is done by and for human beings, values are entangled in the enterprise whether we acknowledge it or not. Rather than <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0186049">dismissing scientists who discuss their values</a>, we ought to encourage scientists and other stakeholders to engage in open, thoughtful reflection about how values influence research. Far from threatening the integrity of science, this is the path to promoting science that is trustworthy and socially responsible.</p>
<p class="fine-print"><em><span>Kevin Elliott does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Science isn’t cold, hard facts uncovered by emotionless robots. Acknowledging how and where values play a role promotes a more realistic view and can advance science’s reputation for reliability.Kevin Elliott, Associate Professor in Lyman Briggs College, Fisheries & Wildlife, and Philosophy, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768512017-05-30T01:49:32Z2017-05-30T01:49:32ZResearch transparency: 5 questions about open science answered<figure><img src="https://images.theconversation.com/files/171204/original/file-20170526-6389-1eepgnq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Opening up data and materials helps with research transparency.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/book-wisdom-life-read-magic-background-515241850">REDPIXEL.PL via Shutterstock.com</a></span></figcaption></figure><p><strong>What is “open science”?</strong></p>
<p><a href="https://osf.io/preprints/psyarxiv/ak6jr">Open science</a> is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research materials, data and lab procedures freely available online to anyone. Many scientists are also proponents of <a href="https://sparcopen.org/open-access/">open access</a>, a parallel movement involving making research articles available to read without a subscription or access fee.</p>
<p><strong>Why are researchers interested in open science? What problems does it aim to address?</strong></p>
<p>Recent research finds that many published scientific findings might not be reliable. For example, researchers have reported being able to replicate <a href="https://elife.elifesciences.org/collections/reproducibility-project-cancer-biology">only 40 percent</a> <a href="https://doi.org/10.1038/nrd3439-c1">or less</a> of <a href="http://www.nature.com/nature/journal/v483/n7391/full/483531a.html">cancer biology results</a>, and a large-scale <a href="https://doi.org/10.1126/science.aac4716">attempt to replicate 100 recent psychology studies</a> successfully reproduced fewer than half of the original results.</p>
<p>This has come to be called a “<a href="https://theconversation.com/we-found-only-one-third-of-published-psychology-research-is-reliable-now-what-46596">reproducibility crisis</a>.” It’s pushed many scientists to look for ways to improve their research practices and increase study reliability. Practicing open science is one way to do so. When scientists share their underlying materials and data, other scientists can more easily evaluate and attempt to replicate them.</p>
<p>Also, open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyze them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (<a href="https://www.nature.com/sdata/">Scientific Data</a>; <a href="http://openpsychologydata.metajnl.com/">Journal of Open Psychology Data</a>). <a href="http://doi.org/10.5334/jopd.ac">A paper in the latter</a> has already been cited 17 times in under three years – nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research.</p>
<p><strong>Wait – open science sounds just like the way I learned in school that science works. How can this be new?</strong></p>
<p>Under the status quo, science is shared through a single vehicle: Researchers publish journal articles summarizing their studies’ methods and results. The key word here is summary: to keep an article clear and succinct, authors often omit important details. Journal articles are vetted via the peer review process, in which an editor and a few experts assess them for quality before publication. But – perhaps surprisingly – the primary data and materials underlying the article are almost never reviewed. </p>
<p>Historically, this made some sense because journal pages were limited, and storing and sharing materials and data were difficult. But with computers and the internet, it’s much easier to practice open science. It’s now feasible to store large quantities of information on personal computers, and <a href="https://www.nature.com/sdata/policies/repositories">online repositories to share study materials and data</a> are becoming more common. Recently, some journals have even begun to <a href="http://journals.plos.org/plosone/s/data-availability">require</a> or <a href="https://osf.io/tvyxz/wiki/5.%20Adoptions%20and%20Endorsements/">reward</a> <a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456">open science practices</a> like publicly posting materials and data.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open science makes sharing data the default.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/client-passing-documentation-binders-his-partner-330663044">Bacho via Shutterstock.com</a></span>
</figcaption>
</figure>
<p>There are still some difficulties sharing extremely large data sets and physical materials (such as the specific liquid solutions a chemist might use), and some scientists might have good reasons to keep some information private (for instance, trade secrets or study participants’ personal information). But as time passes, more and more scientists will likely practice open science. And, in turn, science will improve.</p>
<p>Some view the open science movement as a return to science’s core values: researchers have long <a href="https://doi.org/10.1525/jer.2007.2.4.3">valued transparency</a> as a key ingredient in evaluating the truth of a claim. Now, with technology’s help, sharing everything is much easier.</p>
<p><strong>Why isn’t open science the default? What incentives work against open science practices?</strong></p>
<p>Two major forces work against adoption of open science practices: habits and reward structures. First, most established researchers have been practicing closed science for years, even decades, and changing these old habits requires some upfront time and effort. <a href="https://osf.io">Technology</a> is helping speed this process of adopting open habits, but behavioral change is hard. </p>
<p>Second, scientists, like other humans, tend to repeat behaviors that are rewarded and avoid those that are punished. Journal editors have tended to favor publishing papers that tell a tidy story with perfectly clear results. This has led researchers to craft their papers to be free from blemish, omitting “failed” studies that don’t clearly support their theories. But real data are often messy, so being fully transparent can open up researchers to critique. </p>
<p>Additionally, some researchers are afraid of being “scooped” – they worry someone will steal their idea and publish first. Or they fear that others will <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">unfairly benefit</a> from using shared data or materials without putting in as much effort. </p>
<p>Taken together, some researchers worry they will be punished for their openness and are skeptical that the perceived increase in workload that comes with adopting open science habits is needed and worthwhile. We believe scientists must continue to <a href="https://osf.io/tvyxz/">develop systems</a> to <a href="http://www.ourdigitalmags.com/publication/?i=365522&article_id=2657445&view=articleBrowser&ver=html5#%7B%22issue_id%22:365522,%22view%22:%22articleBrowser%22,%22article_id%22:%222657445%22%7D">allay fears</a> and reward openness. </p>
<p><strong>I’m not a scientist; why should I care?</strong></p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=585&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=585&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=585&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open access is the cousin to open science – the idea is that research should be freely available to all, not hidden behind paywalls.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/34070876@N08/3602393341">h_pampel</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Science benefits everyone. If you’re reading this article now on a computer, or have ever benefited from an antibiotic, or kicked a bad habit following a psychologist’s advice, then you are a consumer of science. Open science (and its cousin, open access) means that anyone – including teachers, policymakers, journalists and other nonscientists – can access and evaluate study information.</p>
<p>Considering automatic enrollment in a 401k at work or whether to have that elective screening procedure at the doctor? Want to ensure your tax dollars are spent on policies and programs that actually work? Access to high-quality research evidence matters to you. Open materials and open data facilitate reuse of scientific products, increasing the value of every tax dollar invested. Improving science’s reliability and speed benefits us all.</p>
<p class="fine-print"><em><span>Elizabeth Gilbert supports the Society for the Improvement of Psychological Science and has published on replication efforts as part of the Open Science Collaboration. Along with Katherine Corker and Barbara Spellman, she has a chapter called "Open Science: What, why, how" forthcoming in the Stevens Handbook of Experimental Psychology and Cognitive Neuroscience.</span></em></p><p class="fine-print"><em><span>Katie Corker is on the executive board for the Society for the Improvement of Psychological Science (improvingpsych.org) and an ambassador for the Center for Open Science (cos.io). She is also an editorial board member for Scientific Data. All of these roles are pro bono.</span></em></p>Partly in response to the so-called ‘reproducibility crisis’ in science, researchers are embracing a set of practices that aim to make the whole endeavor more transparent, more reliable – and better.Elizabeth Gilbert, Postdoctoral Research Fellow in Psychiatry and Behavioral Sciences, Medical University of South CarolinaKatie Corker, Assistant Professor of Psychology, Grand Valley State University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768482017-05-08T00:53:28Z2017-05-08T00:53:28ZPeople don’t trust scientific research when companies are involved<figure><img src="https://images.theconversation.com/files/168205/original/file-20170507-19132-lu45yn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">People seem to think industry-funded research belongs in the garbage.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hiv-testing-laboratory-singleuse-plastic-syringes-506118349">mllejules/Shutterstock.com</a></span></figcaption></figure><p>A soda company sponsoring <a href="https://well.blogs.nytimes.com/2015/08/09/coca-cola-funds-scientists-who-shift-blame-for-obesity-away-from-bad-diets/">nutrition research</a>. 
An oil conglomerate <a href="https://insideclimatenews.org/news/26052016/agu-american-geophysical-union-exxon-climate-change-denial-science-sponsorship">helping fund a climate-related research meeting</a>. Does the public care who’s paying for science?</p>
<p>In a word, yes. When industry funds science, credibility suffers. And this does not bode well for the types of public-private research partnerships that appear to be becoming <a href="http://www.rdmag.com/article/2015/04/how-academic-institutions-partner-private-industry">more prevalent</a> as <a href="https://www.nsf.gov/statistics/2016/nsb20161/#/report/chapter-4/recent-trends-in-u-s-r-d-performance">government funding for research and development lags</a>. </p>
<p>The recurring topic of conflict of interest has made headlines in recent weeks. The National Academies of Science, Engineering, and Medicine has <a href="http://www.the-scientist.com/?articles.view/articleNo/49331/title/National-Academies-Revise-Conflict-of-Interest-Policy/">revised its conflict of interest guidelines</a> following <a href="https://doi.org/10.1371/journal.pone.0172317">questions about whether members</a> of a recent expert panel on GMOs had industry ties or other financial conflicts that were not disclosed in the panel’s final report.</p>
<p><a href="https://doi.org/10.1371/journal.pone.0175643">Our own recent research</a> speaks to how hard it may be for the public to see research as useful when produced with an industry partner, even when that company is just one of several collaborators.</p>
<h2>What people think of funding sources</h2>
<p>We asked our study volunteers what they thought about a proposed research partnership to study the potential risks related to either genetically modified foods or trans fats.</p>
<p>We randomly assigned participants to each evaluate one of 15 different research partnership arrangements – various combinations of scientists from a university, a government agency, a nongovernmental organization and a large food company.</p>
<p>For example, 1/15th of participants were asked to consider a research collaboration that included only university researchers. Another 1/15th of participants considered a research partnership that included both university and government scientists, and so on. In total we presented four conditions where there was a single type of researcher, another six collaborations with two partners, four with three partners and one with all four partners. </p>
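<p>The 15 partnership conditions follow directly from taking every non-empty combination of the four partner types. A minimal Python sketch (an illustration of the study design described above, not the authors’ code) enumerates them:</p>

```python
from itertools import combinations

# The four partner types evaluated in the study.
partners = ["university", "government agency", "NGO", "food company"]

# Every non-empty combination: 4 singles + 6 pairs + 4 triples + 1 quad = 15.
conditions = [
    combo
    for size in range(1, len(partners) + 1)
    for combo in combinations(partners, size)
]

print(len(conditions))  # 15
```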
<p><iframe id="O9jF8" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/O9jF8/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>When a research team included an industry partner, our participants were generally less likely to think the scientists would consider a full range of evidence and listen to different voices. An industry partner also reduced how much participants believed any resulting data would provide meaningful guidance for making decisions.</p>
<p>At the outset of our work, we thought including a diverse array of partners in a research collaboration might mitigate the negative perceptions that come with industry involvement. But, while including scientists from a nonindustry organization (particularly a nongovernmental organization) made some difference, the effect was small. Adding a government partner provided no substantive additional benefit.</p>
<p>When we asked participants to describe what they thought about the research partnership in their own words, they were skeptical whether an industry partner could ever be trusted to release information that might hurt its profits.</p>
<p>Our results may be even more troubling because we chose a company with a good reputation. We used pretests to select particular examples – of a corporation, as well as a university, government agency and nongovernmental organization – that had relatively high positive ratings and relatively low negative ratings in a test sample.</p>
<h2>Can industry do valid science?</h2>
<p>You don’t have to look far for real-life examples of poorly conducted or intentionally misleading industry research. The <a href="https://www.justice.gov/opa/pr/glaxosmithkline-plead-guilty-and-pay-3-billion-resolve-fraud-allegations-and-failure-report">pharmaceutical</a>, <a href="https://www.theguardian.com/environment/2016/sep/22/pesticide-manufacturers-own-tests-reveal-serious-harm-to-honeybees">chemical</a>, <a href="https://www.nytimes.com/2016/09/13/well/eat/how-the-sugar-industry-shifted-blame-to-fat.html">nutrition</a> and <a href="https://www.theguardian.com/environment/2015/mar/25/fossil-fuel-firms-are-still-bankrolling-climate-denial-lobby-groups">petroleum</a> industries have all weathered criticism of their research integrity, and for good reason. These ethically questionable episodes no doubt fuel public skepticism of industry research. Stories of pharmaceutical companies conducting <a href="https://doi.org/10.1371/journal.pmed.0020138">less than rigorous clinical trials</a> for the benefit of their marketing departments, or the tobacco industry steadfastly denying the connection between smoking and cancer in the face of mounting evidence, help explain public concern about industry-funded science. </p>
<p>But industry generally has a long and impressive history of supporting scientific research and technical development. Industry-supported research has <a href="https://www.wired.com/2012/09/ff-corning-gorilla-glass/">generated widely adopted technologies</a>, <a href="http://www.economist.com/technology-quarterly/2016-03-12/after-moores-law">driven the evolution of entire economic sectors</a>, <a href="http://articles.latimes.com/2012/oct/20/local/la-me-stanford-ovshinsky-20121021">improved processes that were harmful to public health and the environment</a> and <a href="https://www.bell-labs.com/our-people/recognition/">won Nobel Prizes</a>. And as scientists not currently affiliated with industry scramble to fund their research in an era of tight budgets, big companies have money to underwrite science.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/168152/original/file-20170505-19116-145xhb5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Does it matter within what kind of institution a researcher hangs her lab coat?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/biologycourses/7006382260">Vivien Rolfe</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Can this lack of trust be overcome? Moving forward, it will be essential to address incentives such as short-term profit or individual recognition that can encourage poor research – in any institutional context. By showing how quickly people may judge industry-funded research, our work indicates that it’s critical to think about how the results of that research can be communicated effectively. </p>
<p>Our results should worry those who want research to be evaluated largely on its scientific merits, rather than based upon the affiliations of those involved. </p>
<p>Although relatively little previous scholarship has investigated this topic, we expected to find that including multiple, nonindustry organizations in a scientific partnership might, at least partly, assuage participants’ concerns about industry involvement. This reflects our initial tentative belief that, given the resources and expertise within industry, there must be some way to create public-private partnerships that produce high-quality research which is perceived widely as such.</p>
<p><a href="http://msutoday.msu.edu/news/2017/public-skeptical-of-research-if-tied-to-a-company/">Our interdisciplinary team</a> – a risk communication scholar, a sociologist, a philosopher of science, a historian of science and a toxicologist – is also examining philosophical arguments and historical precedents for guidance on these issues.</p>
<p>Philosophy can tell us a great deal about how the values of investigators <a href="https://global.oup.com/academic/product/a-tapestry-of-values-9780190260811?lang=en&cc=us">can influence their results</a>. And history shows that not so long ago, up until a few decades after World War II, many considered industry support <a href="http://physicstoday.scitation.org/doi/10.1063/PT.3.3081">a way to uphold research integrity</a> by protecting it from government secrecy regimes.</p>
<p>Looking forward, we are planning additional social scientific experiments to examine how specific procedures that research partnerships sometimes use may affect public views about collaborations with industry partners. For example, perhaps open-data policies, transparency initiatives or external reviewer processes may alleviate bias concerns.</p>
<p>Given the central role that industry plays in scientific research and development, it is important to explore strategies for designing multi-sector research collaborations that can generate legitimate, high-quality results while being perceived as legitimate by the public.</p>
<p class="fine-print"><em><span>Joseph D. Martin receives funding from the National Science Foundation.</span></em></p><p class="fine-print"><em><span>Aaron M. McCright, John C. Besley, Kevin Elliott, and Nagwan Zahry do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Scientists need funding to do their work. But a new study finds turning to industry partners taints perceptions of university research, and including other kinds of partners doesn’t really help.John C. Besley, Associate Professor of Advertising and Public Relations, Michigan State UniversityAaron M. McCright, Associate Professor of Sociology, Michigan State UniversityJoseph D. Martin, Fellow-in-Residence at the Consortium for History of Science, Technology, and Medicine and Visiting Research Fellow at the Centre for History and Philosophy of Science, University of LeedsKevin Elliott, Associate Professor of Fisheries & Wildlife and Philosophy, Michigan State UniversityNagwan Zahry, PhD Student in Media and Information Studies, Michigan State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/741982017-03-15T09:49:46Z2017-03-15T09:49:46ZThe science ‘reproducibility crisis’ – and what can be done about it<figure><img src="https://images.theconversation.com/files/160518/original/image-20170313-9641-122u3y0.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Science and integrity is under the microscope.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/scientist-equipment-science-experiments-laboratory-glassware-450321097?src=itcZC06Q24dlVK03nfk82Q-1-1">Shutterstock</a></span></figcaption></figure><p>Reproducibility is the idea that an experiment can be repeated by another scientist and they will 
get the same result. Demonstrating this is important for showing that an experiment’s claims are true and for making them useful in further research. </p>
<p>However, science appears to have an issue with reproducibility. A survey by Nature revealed that <a href="http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970">52% of researchers</a> believed there was a “significant reproducibility crisis” and 38% said there was a “slight crisis”.</p>
<p>We asked three experts how they think the situation could be improved.</p>
<h2>Open Research is the answer</h2>
<p><em>Danny Kingsley, head of the Office of Scholarly Communication, University of Cambridge</em></p>
<p>The solution to the scientific reproducibility crisis is to move towards <a href="http://osc.cam.ac.uk/open-research">Open Research</a> – the idea that scientific knowledge of all kinds should be openly shared as early as it is practical in the discovery process. We need to reward the publication of research outputs along the entire process, rather than just each journal article as it is published. </p>
<p>As well as other research outputs – such as data sets – we should reward research productivity itself as well as the thought process and planning behind the study. This is why <a href="http://neurochambers.blogspot.co.uk/2013/04/scientific-publishing-as-it-was-meant_10.html">Registered Reports</a> was launched in 2013, where researchers register the proposal and how the research will be conducted, before any experimental work commences. It allows editorial decisions to be based on the rigour of the experimental design and increases the likelihood that the findings could be replicated. </p>
<p>In the UK there is now a requirement from most <a href="http://www.data.cam.ac.uk/funders">funders</a> that the data underpinning a research publication is made available. However, although there are moves towards open research, many argue against the sharing of data among the research community. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=425&fit=crop&dpr=1 754w, https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=425&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/160520/original/image-20170313-9613-2cfmqw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=425&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Questionable findings are often hidden.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/product-researching-marketing-team-work-loft-425326300?src=WZtYxmdFeSANhTM2RN1K6w-2-98">Shutterstock</a></span>
</figcaption>
</figure>
<p>Researchers often write multiple papers from a single data set, and many fear that if the data are released with the first publication, the researcher will be “scooped” – another research group will publish findings from similar data sets before the original authors get the chance to publish follow-up articles and gain maximum credit for the work. If the publication of data itself could be recorded as a “research output”, then being scooped would no longer be such an issue, as credit would already have been given.</p>
<p><a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0026828">One benefit of sharing data</a> could be an improvement in its quality – as previous research has shown. And there have been small steps towards this goal, such as a <a href="https://www.force11.org/group/joint-declaration-data-citation-principles-final">standard method of citing data</a>. </p>
<p>We also need to publish “null” results – those that do not support the hypothesis – to prevent other researchers wasting time repeating work. There are a few publication outlets for this, and a <a href="https://techcrunch.com/2017/02/28/researchgate-raises-52-6m-for-its-social-research-network-for-scientists/">recent press release from ResearchGate</a> indicated that it supports the sharing of failed experiments through its “project” offering. It lets users upload and track experiments as they are happening – meaning no one knows how they will turn out.</p>
<h2>Psychology is leading the way out of crisis</h2>
<p><em>Jim Grange, senior lecturer in psychology, Keele University</em></p>
<p>To me, it is clear that there is a reproducibility crisis in psychological science, and across all sciences. Murmurings of low reproducibility began in 2011 – the “<a href="http://ejwagenmakers.com/2012/Wagenmakers2012Horrors.pdf">year of horrors</a>” for psychology – with a high profile fraud case. But since then, <a href="https://osf.io/vmrgu/">The Open Science Collaboration</a> has published the findings of a large-scale effort to closely replicate 100 studies in psychology. Only 36% of them could be replicated. </p>
<p>The <a href="https://arxiv.org/abs/1205.4251">incentive structures</a> in universities and the attitude that you “publish or perish” means that researchers prioritise “getting it published” over “getting it right”. It also means that some, implicitly or explicitly, use questionable research practices to achieve publication. These may include failing to report parts of data sets or trying different analytical approaches to make the data fit what you want to say. It could also mean presenting exploratory research as though it was originally confirmatory (designed to test a specific hypothesis).</p>
<p>However, many psychology journals now recommend or require the preregistration of studies, which <a href="http://www.apa.org/science/about/psa/2015/08/pre-registration.aspx">allows researchers to detail their predictions</a>, experimental protocols and planned analytical strategy before data collection. This gives readers confidence that no questionable research practices have occurred.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/160499/original/image-20170313-19247-57184o.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Erasing data: a questionable research practice.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/erasing-data-correction-fluid-427863787?src=1XEqIKb5SpySP5ZUCpLmZg-1-69">Shutterstock</a></span>
</figcaption>
</figure>
<p><a href="https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/registered-reports-a-step-change-in-scientific-publishing">Registered Reports</a> has taken this further. But of course, once results are produced, isolated findings don’t mean much until they have been replicated. </p>
<p>I make efforts to replicate results before trying to publish, and you’d be forgiven for thinking that replication attempts are common in science. But this is simply not the case. Journals seek novel theories and findings, and view replications as treading over old ground, which leaves little incentive for career-minded academics to conduct them.</p>
<p>This has also led to the introduction of <a href="https://www.psychologicalscience.org/publications/replication">Registered Replication Reports</a> in <a href="http://journals.sagepub.com/home/pps">Perspectives on Psychological Science</a>. This is where teams of researchers each follow identical procedures independently and aim to replicate important findings from the literature. A single paper then collates and analyses them to establish the size and reproducibility of the original study.</p>
<p>Although psychology is leading the way for improvements with these pioneering initiatives, it is certainly not out of the woods. But it has started to move beyond a crisis and make impressive strides – more disciplines need to follow suit.</p>
<h2>This is a publication bias crisis</h2>
<p><em>Ottoline Leyser, director of the Sainsbury Laboratory, University of Cambridge</em></p>
<p>Reproducibility is a fundamental building block of science. If two people do the same experiment, they should get the same result. But there are many good reasons why two “identical” experiments might not give the same result such as unknown differences that have not been considered – and some <a href="http://www.plantcell.org/content/2/4/279.abstract">exciting discoveries have been made this way</a>.</p>
<p>So if a lack of reproducibility is itself not necessarily a problem, why is everybody talking about a crisis? In some cases poor practice and corner cutting have contributed to lack of reproducibility, and there have been some <a href="http://www.sciencemag.org/news/2012/11/final-report-stapel-affair-points-bigger-problems-social-psychology">high profile cases of out and out fraud</a>. It’s a major concern, but what is causing it?</p>
<p>In 2014 I chaired a project on the research culture in Britain for the <a href="http://nuffieldbioethics.org/project/research-culture/">Nuffield Council on Bioethics</a>, which was motivated by <a href="http://theconversation.com/the-dark-side-of-research-when-chasing-prestige-becomes-the-prize-35001">concerns about research integrity</a>, including over-claiming, rushing prematurely to publication and incorrect use of statistics. The main conclusion was that poor practice is incentivised by hyper-competition with overly narrow rules for winning.</p>
<p>There is an excessive focus on the publication of groundbreaking results in prestigious journals. But science cannot only be groundbreaking: there is a lot of important digging to do after new discoveries. Yet there is not enough credit in the system for this work, and it may remain unpublished because researchers prioritise their time on eye-catching papers, hurriedly put together. </p>
<p>The reproducibility crisis is actually a publication bias crisis, driven by the reward structures in the research system. Various approaches have been suggested to address these problems, such as pre-registration of experiments. However, the research landscape is highly diverse and this type of solution is only sensible for some research types. The most widely relevant solution is to change the reward structures. In the UK there is a major opportunity to do this by reforming the <a href="https://theconversation.com/qanda-what-is-the-ref-and-how-is-the-quality-of-university-research-measured-35529">Research Excellence Framework</a> (REF). Through the REF, public money is allocated to universities based on the “quality” of the four best research outputs, usually papers, produced by each of their principal investigators over approximately six years, and it disproportionately rewards groundbreaking research. </p>
<p>We need to reward a portfolio of research outputs, including not only the headline-grabbing results but also confirmatory work and community data sharing, which are the hallmarks of a truly high-quality research endeavour. This would go a long way to shifting the current destructive culture.</p>
<p class="fine-print"><em><span>Ottoline Leyser receives research funding from The Gatsby Charitable Foundation, The European Research Council and the Biotechnology and Biological Sciences Research Council. She is affiliated with the University of Cambridge and the Royal Society. She is a former member of the Nuffield Council on Bioethics.</span></em></p><p class="fine-print"><em><span>Danny Kingsley is a member of the Research Councils UK Open Access Practitioners Group and on the Steering Committee of the UK Scholarly Communication Licence. She is an active member of FORCE11.</span></em></p><p class="fine-print"><em><span>Jim Grange does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>We asked three experts for their takes. Ottoline Leyser, Director of the Sainsbury Laboratory, University of Cambridge. Danny Kingsley, Head, Office of Scholarly Communication, University of Cambridge. Jim Grange, Senior Lecturer in psychology, Keele University. Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/726692017-02-26T16:59:05Z2017-02-26T16:59:05ZThe peer-review system for academic papers is badly in need of repair<figure><img src="https://images.theconversation.com/files/156762/original/image-20170214-25992-15ckbwa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The scientific refereeing process can be tedious, time-consuming and isn't very rewarding.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Peer review, or scientific refereeing, is the basis of the academic process. It’s a rigorous evaluation that aims to ensure only work which advances knowledge is published in a scientific journal. 
Scientists must be able to trust this system: if they see that something is peer reviewed, it should be a hallmark of quality.</p>
<p>When the editor of a scientific journal receives a manuscript, they ask another scientist – a specialist in the field – to review it. The referee is required to advise the editor on whether the manuscript should be published and to give <a href="https://theconversation.com/how-plugging-into-well-connected-colleagues-can-help-research-fly-71223">feedback</a> to the authors.</p>
<p>The system is not flawless. There have been instances of <a href="http://blogs.lse.ac.uk/impactofsocialsciences/2016/12/13/manipulating-the-peer-review-process-why-it-happens-and-how-it-might-be-prevented/">fraud and manipulation</a> in the refereeing process, but these are – we hope – isolated cases. </p>
<p>But there are much bigger systemic problems associated with peer review, and these are negatively affecting scientific credibility. They include the fact that, globally, it is hard to find referees: reviewing a manuscript requires a lot of time and offers minimal reward. Very few journals pay referees, and most academics who act as referees do so for free in their spare time.</p>
<p>On top of this, those who do act as referees often struggle to deliver on time. Worse still, their reports are not always helpful to editors or authors. </p>
<p>Some journals work actively to tackle these issues, but more can be done to ensure that the scientific refereeing system retains its integrity.</p>
<h2>The challenges</h2>
<p>Journal editors are frustrated about the dearth of referees. In an <a href="https://hub.wiley.com/community/exchanges/discover/blog/2015/01/07/recognition-for-peer-review-and-editing-in-australia-and-beyond">open letter</a> to the scientific community, a group of editors wrote that, despite:</p>
<blockquote>
<p>… so much weight [being] given to peer-reviewed publication the essential “backroom” tasks of editing journals and reviewing articles are rarely acknowledged as aspects of academic performance.</p>
</blockquote>
<p>No wonder they’re worried: more than <a href="http://www.informationr.net/ir/14-1/paper391.html">1 million research articles</a> are published globally each year. That requires a lot of referees. But finding appropriate referees is just one part of the bigger task facing editors.</p>
<p>Editors have to get referees to stick to the agreed deadlines. That’s not easy: people tend not to prioritise their review tasks since time spent on their own research is more rewarding.</p>
<p>An experiment conducted with the Journal of Public Economics based in Cambridge in the US found that its referees are late with their reports <a href="http://pubs.aeaweb.org/doi/10.1257/jep.28.3.169">half of the time</a>. There are also instances, across journals, of referees simply never delivering even though they’ve promised to do so.</p>
<p>In some disciplines, these problems have given rise to a serious publication lag – the time between a manuscript’s arrival and its actual publication. Over the past 30 years this lag has nearly <a href="http://www.journals.uchicago.edu/doi/10.1086/341868">tripled</a> in economics, from 11 months to just under 30 months. </p>
<p>It not only takes longer to disseminate ideas. The publication lag also worsens the prospects of <a href="http://voxeu.org/article/publication-lags-and-young-economists-research-output">young scientists</a> who need publications to be hired.</p>
<p>Another problem with the existing system is that referee reports do not always adequately inform the editor nor really suggest ways of fundamentally improving the article.</p>
<p>It’s not just authors who complain about this: <a href="http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=8104">journal editors</a> do too. One explanation is that referees may follow their own interests, which are not necessarily those of the editor nor the author.</p>
<p>All too often they try to impress editors by making blemishes look like flaws. Economists call this problem “<a href="https://academic.oup.com/rfs/article/28/3/637/1577216/Editorial-Cosmetic-Surgery-in-the-Academic-Review">signal jamming</a>”. At worst, it may lead to innovative research being turned down.</p>
<h2>Possible changes</h2>
<p>The good news is that journals are aware of these problems, and are committed to tackling them.</p>
<p>Journals should develop and nurture a large base of potential referees, constantly adding new ones and retaining old ones. And these referees need proper recognition. This could involve simply thanking referees publicly, or perhaps awarding prizes for good refereeing.</p>
<p>Journals should also consider paying referees. The estimated value of unpaid referee time is as much as <a href="https://www.timeshighereducation.com/news/unpaid-peer-review-is-worth-19bn/402189.article">£1.9 billion a year</a> – it is clearly a service that requires some financial reward.</p>
<p>Small changes help, too. <a href="http://voxeu.org/article/lessons-experiment-referees-journal-public-economics">Shorter deadlines</a> reduce turnaround times, because referees often submit just before the deadline. A public list of referees’ turnaround times encourages them to stay on schedule, too.</p>
<p>Editors should also <a href="http://rfssfs.org/files/2015/01/Joint-Editorial-Advice-for-Authors-2002.pdf">reject</a> <a href="https://academic.oup.com/rfs/article/26/11/2685/1613905/Joint-Editorial">articles</a> that are too sloppy, rather than letting a referee improve them.</p>
<p>Editors should also engage in “<a href="https://academic.oup.com/rfs/article/28/3/637/1577216/Editorial-Cosmetic-Surgery-in-the-Academic-Review">active editing</a>”, instructing the author to ignore referee requests that are merely asking them to fix blemishes.</p>
<p>Editors should also <a href="https://academic.oup.com/rfs/article/25/5/1331/1569914/Reviewing-Less-Progressing-More">pare down</a> the demands on referees, perhaps by asking them to <a href="http://pubs.aeaweb.org/doi/10.1257/jep.31.1.231">separate</a> necessities from suggestions. The guiding principle should be that the work is the author’s – not the referee’s.</p>
<h2>New approaches are being tested</h2>
<p>Journals are already testing new approaches. For instance, some require their editors to <a href="http://revfin.org/new-referee-awards-and-referee-database/">judge the quality</a> of a referee to weed out those people who are simply unhelpful. </p>
<p>Elsevier, a major publisher, has launched a <a href="https://www.reviewerrecognition.elsevier.com/">platform</a> which publicly lists referees and how often they have written referee reports. A similar, independent platform is <a href="https://publons.com/home/">Publons</a>.</p>
<p>“Open peer review” is also growing in popularity. Traditionally, reviewers remain anonymous to guarantee an unbiased opinion. Open peer review goes the opposite way: the referee’s name and report are published together with the article. Everyone can see who the referee was, which is meant to encourage transparency. Not everyone is <a href="http://www.nature.com/nature/peerreview/debate/">convinced</a> about this approach.</p>
<p>Another option is post-publication peer review, in which articles are open for comments all the time from anyone. Sadly, <a href="http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/07/controversy-of-post-publication-peer-review/">internet trolls</a> have tainted this process for many scientists.</p>
<p>It is encouraging that the problems of peer review are being debated and that new approaches are being tested. The peer-review process is very important and its challenges must be taken seriously if academics are to keep publishing quality articles that disseminate new ideas.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are major systemic problems associated with peer review that are negatively affecting scientific credibility. Michael E. Rose, PhD Candidate in Economics, University of Cape Town. Willem H. Boshoff, Associate Professor of Economics, Stellenbosch University. Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/234852014-02-20T14:55:16Z2014-02-20T14:55:16ZStudy shows concern over use of social media data in research<figure><img src="https://images.theconversation.com/files/42081/original/4dztk44b-1392901115.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">"Having a really hard time at the office today." What can you believe in social media?</span> <span class="attribution"><a class="source" href="http://www.flickr.com/photos/apuch/8397641224/sizes/o/">Apuch</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>For researchers, social media is a veritable goldmine of opportunity. Every day, reams and reams of naturally occurring data are produced by users of Twitter, Instagram and Facebook. At any given moment, millions of people are tweeting about what they are doing, where they are going and how they are feeling. And a new field of research is emerging that uses this information to investigate all kinds of issues.</p>
<p>Social media data has been used on projects looking at the <a href="http://bit.ly/1d4g3kl">English riots of 2011</a> and numerous other <a href="http://www.nytimes.com/2007/12/17/style/17facebook.html?pagewanted=1&ei=5124&en=33ca15953318a6f5&ex=1355720400&partner=facebook&exprod=facebook&_r=0">sociological studies</a>.</p>
<p>But the people producing the data used in projects like these may not think about the fact that the information they post is valuable for research and might not even know if it is already being used. It’s an ethical problem that is yet to be resolved.</p>
<p>A new <a href="http://www.natcen.ac.uk/media/282288/p0639-research-using-social-media-report-final-190214.pdf">report</a> has found that social media users have mixed feelings about their tweets and posts being used in sociological studies. They were particularly concerned about just how reliable information like this is if we want to learn about people. This is just one of the hurdles researchers need to jump if this type of work is to be accepted into the mainstream, beyond academia. Can you believe what people say on Facebook? Can you draw meaningful conclusions from it?</p>
<p>The study involved 34 people, all of whom used social networking sites. They were asked their opinions on a range of issues and were presented with potential uses for social media data in research. This included looking at tweets to gauge opinion on the Olympics, or looking at LinkedIn and Facebook profiles to see whether people describe themselves differently when socialising and when networking professionally online.</p>
<p>It’s a small sample but there is no other research in this area to date. Other projects are in the pipeline too.</p>
<h2>A question of consent</h2>
<p>Social media sites are notorious for regularly changing their default privacy settings. Even when they stick to the same settings for a long time, they are often complex and difficult to navigate. We uncovered a patchy understanding of these settings that should be food for thought for any researcher thinking about scraping data from the sites.</p>
<p>Although users had concerns about their privacy, many had not tried to change their settings and said they felt ill-informed about the information others could access about them. This should remind researchers that they must not assume that social media users even know their information has been publicly posted and that they should consider these issues when designing a project.</p>
<p>Some respondents felt that the internet is a public space anyway and that users should only post information online if they are happy for others to see it. But this was a minority view. More people thought that researchers should seek consent before using social media information.</p>
<p>Some participants felt consent should always be granted, while others felt it could be waived if the research was for the greater good, such as investigating domestic violence, for example.</p>
<p>There was a stronger consensus on anonymity, with most participants believing that failing to anonymise social media information could potentially harm reputations or compromise security for the people posting the information.</p>
<p>This poses a problem for researchers though. It’s often easy to trace quotes back to a Twitter account even if a user name has not been given. Researchers should take steps to ensure anonymity, such as by paraphrasing text and never including Twitter handles. But that is not always easy without losing the sense of what is being said. Tweets are, after all, only 140 characters long.</p>
<h2>Can you believe what you read?</h2>
<p>But ethics and consent didn’t appear to be the most pressing issues for those who questioned the role of social media in research. What bothered them most was the extent to which we can actually rely on the information on these sites when we draw conclusions about people or cultures.</p>
<p>From personal experience, participants said their posts are sometimes incomplete, exaggerated or false. They were concerned that taking a single post or picture without context could give an inaccurate impression of what they think or how they live.</p>
<p>There was also the feeling that the internet provided a veil of anonymity that allowed some users to feel more comfortable posting extreme or exaggerated views. This has real implications for researchers. If how we behave online is not the same as how we behave offline, how useful are these sites as data sources?</p>
<p>These are fundamental issues as social media research tries to gain credibility outside academia and those carrying it out face extra pressure to be transparent about what they’re doing.</p>
<p>Researchers using social media data in their work are certainly aware of these issues. They know that the ethical principles that guide more traditional types of investigation still apply here but that they need to be flexible when applying them to sources that are still evolving. To gain wider acceptance, they need to consider how to better articulate the nature, value and rigour of their research, particularly if using social media as a source.</p>
<p class="fine-print"><em><span>Alexandra Fry does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>For researchers, social media is a veritable goldmine of opportunity. Every day, reams and reams of naturally occurring data are produced by users of Twitter, Instagram and Facebook. At any given moment…Alexandra Fry, Researcher, children and young people, National Centre for Social ResearchLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/182712013-09-26T04:30:43Z2013-09-26T04:30:43ZThink alcohol and energy drinks are nothing to worry about? Think again<figure><img src="https://images.theconversation.com/files/31940/original/8smckjsq-1380150792.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Drinkers who consume energy drinks record higher breath alcohol concentrations than those who don't.</span> <span class="attribution"><span class="source">Flickr/thewhitestdogalive</span></span></figcaption></figure><p>Heavy drinkers are mixing alcohol with energy drinks to enable them to drink longer and get more drunk. While the trend is concerning many public health researchers – because the risks remain unknown – others are attempting to allay these fears, claiming there’s nothing to worry about. </p>
<p>Late on Friday and Saturday nights (or, more accurately, early on Saturday and Sunday mornings), <a href="http://www.ndlerf.gov.au/pub/Mono_46_summary.pdf">around 40% of people on Australian city streets</a> are heavily intoxicated, with breath alcohol concentrations (BAC) greater than 0.087. Nearly a quarter of these drinkers have consumed more than two energy drinks. </p>
<p>We don’t have equally reliable data for other countries, but self-reported use abroad is high. Around <a href="http://www.nutritionj.com/content/6/1/35">three-quarters</a> of college students in the United States and <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1530-0277.2007.00464.x/abstract%5D">85%</a> of Italian students report consuming alcohol energy drinks in the past month. </p>
<p>Our <a href="http://www.ndlerf.gov.au/pub/Mono_46_summary.pdf">research</a>, and that of others around the world, has shown that drinkers who consume energy drinks record higher breath alcohol concentrations than those who don’t. They’re also <a href="http://www.sciencedirect.com/science/article/pii/S0306460309003104">more likely to report</a> engaging in aggressive acts; being injured; having driven while drunk or been the passenger of a drunk driver; and having taken sexual advantage of, or been taken advantage of by, another person. </p>
<p>But these studies don’t tell whether the energy drinks are the culprit, whether people who are more likely to engage in these behaviours are more likely to use energy drinks, or perhaps most likely, some combination of the two.</p>
<p>Normally, experimental research is able to give us some answers. But ethics committees are extremely reluctant to allow researchers to reproduce in the laboratory the levels of alcohol intoxication and energy drink use we see on our streets. </p>
<p>Therefore, much of the laboratory research has, for ethical reasons, been confined to studying the effects of combining lower-levels of alcohol intoxication (BAC under 0.08) with a single energy drink. These doses equate to a coffee and a few beers, far below the levels of consumption that raise public health concerns.</p>
<p>Some of the researchers doing these studies <a href="http://onlinelibrary.wiley.com/doi/10.1111/dar.2012.31.issue-s1/issuetoc">have argued</a> that we shouldn’t be concerned about the risks of combining alcohol and energy drinks. Many of those who draw this reassuring conclusion have been <a href="http://www.bmj.com/content/347/bmj.f5345">funded by</a> one of the major energy drink producers, Red Bull.</p>
<p>The industry-friendly conclusions from the laboratory studies are undeniably correct about alcohol energy drinks when consumption is limited to a single energy drink and alcohol use is limited to levels still considered safe for driving. But for researchers interested in night-time violence, studies which look at people under 0.08 are largely irrelevant. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=500&fit=crop&dpr=1 754w, https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=500&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/31927/original/rwvg2wr7-1380088616.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=500&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Around 40% of people on Australian city streets at 4am have a BAC greater than .087.</span>
<span class="attribution"><span class="source">Plashing Vole</span></span>
</figcaption>
</figure>
<p>So it’s concerning when these researchers claim we don’t need to do any more research on this topic when they simply haven’t investigated the levels of alcohol and energy drink consumption at which trouble is likely to occur.</p>
<p>It’s especially worrisome that four out of five talks at special conference sessions on this topic have been made by industry-funded researchers. The same speakers have been funded to attend conferences around the world by a company with financial interest in the research outcomes. The frequent failure to disclose this fact raises questions about the use of research findings as image management.</p>
<p>There are two core issues of public health concern which need to be investigated. First, is there an interaction between alcohol and energy drink consumption at higher levels of intoxication, as seen on our streets – for example, when people have had 10 drinks or have a BAC greater than .10?</p>
<p>And second, is there an interaction between a given level of alcohol use and the effects of higher levels of energy drink use – for example, between two and three standard cans?</p>
<p>Until we know the answers to these questions we shouldn’t be misleadingly reassured by laboratory studies which purport to show that energy drinks have no effects on intoxication.</p>
<p class="fine-print"><em><span>Peter Miller receives funding from Australian Research Council, grants from NSW Government, grants from National Drug Law Enforcement Research Fund, grants from Foundation for Alcohol Research and Education, grants from Cancer Council Victoria, grants from QLD government, grants from Australian Drug Foundation, other from Australasian Drug Strategy Conference, other from International Drug Policy Coalition, outside the submitted work. He is affiliated with the academic journal, Addiction.</span></em></p><p class="fine-print"><em><span>Wayne Hall receives funding from a National Health and Medical Research Council Australia Fellowship..</span></em></p>Heavy drinkers are mixing alcohol with energy drinks to enable them to drink longer and get more drunk. While the trend is concerning many public health researchers – because the risks remain unknown – others…Peter Miller, Principal Research Fellow, Deakin UniversityWayne Hall, Professor & Deputy Director (Policy) UQ Centre for Clinical Research, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/170912013-08-16T04:23:27Z2013-08-16T04:23:27ZWhat Australia should do to ensure research integrity<figure><img src="https://images.theconversation.com/files/29385/original/bzjrbq5n-1376621381.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Public confidence in medical research is vulnerable to attacks on its integrity.</span> <span class="attribution"><span class="source">Andrew Huff</span></span></figcaption></figure><p>Clinical trials of an experimental cancer drug being undertaken by the University of New South Wales were <a href="http://www.abc.net.au/news/2013-08-12/trials-of-skin-cancer-drug-dz13-suspended-amid-misconduct-claims/4881622">suspended</a> this week, after questions about the accuracy of some preliminary results were made public. </p>
<p><a href="http://www.couriermail.com.au/news/queensland/qut-reputation-at-risk-after-grant-application-and-research-mistakes/story-fnihsrf2-1226687129603">Allegations of inaccuracies</a> in the reporting of research results have also recently been raised at Queensland University of Technology. </p>
<p>In both instances, scientific publications have been retracted. And, in one, the <a href="https://theconversation.com/how-we-deal-with-alleged-research-misconduct-nhmrc-17101">institution potentially faces repaying</a> a sizeable research grant from the National Health and Medical Research Council (NHMRC). </p>
<p>Researchers at both institutions have stated that any errors were inadvertent, and investigations are ongoing. </p>
<p>But delays in disclosure and investigation of such cases raise questions about the effectiveness of medical research governance, and the independence of the existing regulatory system within universities and research institutes. </p>
<h2>The importance of integrity</h2>
<p>The importance of adequate regulation for maintaining the integrity of medical research cannot be overstated. </p>
<p>The <a href="http://www.nhmrc.gov.au/guidelines-publications">guidelines</a> regulating medical research in Australia are derived from <a href="http://www.nejm.org/doi/full/10.1056/NEJM199711133372006">information gathered</a> during the Nuremberg trials, on the conduct of Nazi doctors and their role in experimentation and medical research during the Second World War. </p>
<p>The profound disregard for the rights, dignity and very humanity of the participants experimented on by doctors during this period was so great that it was internationally acknowledged that not only should the <a href="http://www.un.org/en/documents/udhr/">Universal Declaration of Human Rights</a> be established, but that there should also be a code of conduct specifically regulating the way medical research is undertaken. </p>
<p>This document was referred to as the <a href="http://www.bmj.com/content/313/7070/1448.1">Nuremberg Code</a>, and it evolved into the 1964 <a href="http://www.wma.net/en/30publications/10policies/b3/">Helsinki Declaration</a> – an internationally accepted document attributable to the World Medical Association. </p>
<p>The human rights principles espoused in the document (self-determination and not subjection, without free consent, to medical or scientific experimentation) intersect with principles of law (bodily inviolability) and bioethics, including autonomy (the right to make one’s own choices) and non-maleficence (to do no harm).</p>
<h2>The risks of misconduct</h2>
<p>For researchers, the consequences of failing to comply with the guidelines include loss of reputation and opportunities for research funding. More broadly, research misconduct can affect public support, and undermine confidence in the integrity of whole fields of <a href="https://theconversation.com/rogues-or-respectable-how-climate-change-sceptics-spread-doubt-and-denial-1557">study</a>. </p>
<p>An example of this is the rise of the vaccination “debate”. Those opposed to the science of vaccination rely, in part, on a now-retracted paper by Andrew Wakefield, who has <a href="http://news.bbc.co.uk/2/hi/health/8695267.stm">been de-registered</a>.</p>
<p>Wakefield <a href="http://www.bmj.com/content/342/bmj.c7452">falsely</a> linked the measles-mumps-rubella (MMR) vaccine with autism and caused enormous harm to the credibility and implementation of this <a href="http://www.cdc.gov/vaccinesafety/00_pdf/CDCStudiesonVaccinesandAutism.pdf">safe</a> and highly-effective public health measure. </p>
<p>Ethically, researchers have an obligation to avoid exposing study participants to risk, or using them in flawed research. Clinical trials based on erroneous data violate participants’ trust in researchers. </p>
<p>Worse still is the violation of trust of participants who volunteer for trials based on erroneous preliminary data – the hope offered to them is both false and cruel, even if “inadvertent”.</p>
<h2>Creating a culture of disclosure</h2>
<p>Clinical medicine invests heavily in the prevention of adverse events and harm to patients. It promotes a culture of disclosure. </p>
<p>Prevention of, and open disclosure about, adverse events in clinical research, and the integrity of data should be no less important. Openness is vital to protecting integrity, and the rights and welfare of participants and researchers. </p>
<p>Australian medical research needs an independent regulator authorised to investigate complaints raised by whistle-blowers and others. </p>
<p>The NHMRC and Australian Research Council, as the bodies responsible for administering public funding for medical research in Australia, are not appropriate regulators for this task. Nor are universities and research institutions sufficiently independent to investigate staff or commercial partners and offshoots. </p>
<p>And <a href="http://www.abc.net.au/news/2013-08-13/calls-for-independent-body-to-oversee-scientific-research-in-aus/4884460">peer review is also not fail-safe</a>, as flawed science has been published.</p>
<p>Public perceptions that institutions may hide behind claims of commercial confidentiality or assurances that internal reviews have been fair, thorough and effective to avoid embarrassment and inconvenience do nothing to promote confidence in research integrity. </p>
<p>The University of New South Wales should be commended for avoiding these pitfalls in this instance. </p>
<p>Improving Australia’s research integrity would require two key changes: legal moves to establish an independent regulator; and <a href="https://theconversation.com/clearing-the-air-why-more-retractions-are-good-for-science-6008">cultural change</a> that rewards early and public self-disclosure of errors, and <a href="https://theconversation.com/uk-researcher-sentenced-to-three-months-jail-for-faking-data-13619">punishment</a> of intentional or reckless misconduct after active independent investigation of allegations.</p>
<p>Without such changes, public confidence in medical research – and perceptions of the value of funding it from the public purse – remain vulnerable to attacks on its integrity. </p>
<p class="fine-print"><em><span>Ruth Townsend is a legal member of the Australian Defence Force Human Research Ethics Committee and has previously served on other research ethics committees.</span></em></p><p class="fine-print"><em><span>Wendy Bonython is a legal member of the ADF Human Research Ethics Committee, and has previously served on other research ethics committees in the same capacity.</span></em></p><p class="fine-print"><em><span>Bruce Baer Arnold does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Clinical trials of an experimental cancer drug being undertaken by the University of New South Wales were suspended this week, after questions about the accuracy of some preliminary results were made public… Dr Ruth Townsend, Lecturer in health law, ethics and human rights, Australian National University. Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra. Wendy Bonython, Assistant Professor, School of Law – Torts, Health and Biotechnology, University of Canberra. Licensed as Creative Commons – attribution, no derivatives.