tag:theconversation.com,2011:/au/topics/science-journals-30086/articles
Science journals – The Conversation
2024-02-23T13:50:45Z
tag:theconversation.com,2011:article/220635
2024-02-23T13:50:45Z
2024-02-23T13:50:45Z
Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create
<figure><img src="https://images.theconversation.com/files/577159/original/file-20240221-22-ttfzl.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2070%2C1449&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The pandemic spurred an increase in COVID-19 research, much of it with methodological holes.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/coronavirus-damage-royalty-free-image/1266909460">Andriy Onufriyenko/Moment via Getty Images</a></span></figcaption></figure><p>Early in the COVID-19 pandemic, researchers <a href="https://doi.org/10.1038/d41586-020-03564-y">flooded journals</a> with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.</p>
<p>However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used <a href="https://doi.org/10.1162/qss_a_00257">poor quality methods</a>. <a href="https://doi.org/10.1186/s12874-020-01190-w">Several other</a> <a href="https://doi.org/10.1038/s41467-021-21220-5">reviews of</a> <a href="https://doi.org/10.1371/journal.pone.0241826">studies published</a> in medical journals have also shown that much early COVID-19 research used poor research methods.</p>
<p>Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar <a href="https://doi.org/10.3390/ijerph17051729">used data</a> from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited <a href="https://scholar.google.com/citations?hl=en&vq=med_publichealth&view_op=list_hcore&venue=kEa56xlDDN8J.2023">over 11,000 times</a>.</p>
<p>A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not <a href="https://doi.org/10.1089/ees.2016.0223">strongly linked to research quality</a>, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.</p>
<h2>Methodology matters</h2>
<p>I am a <a href="https://scholar.google.com/citations?user=X1o1PaQAAAAJ&hl=en">public health researcher</a> with a long-standing interest in research quality and integrity. This interest stems from my belief that science has helped solve important social and public health problems. Unlike the anti-science movement <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">spreading misinformation</a> about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.</p>
<p>The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information. </p>
<p>For example, researchers have <a href="https://www.sfu.ca/%7Epalys/Campbell&Stanley-1959-Exptl&QuasiExptlDesignsForResearch.pdf">known for decades</a> that for studies evaluating the effectiveness of an intervention, a <a href="https://www.britannica.com/science/control-group">control group</a> is needed to know whether any observed effects can be attributed to the intervention. </p>
<p><a href="https://doi.org/10.1111/dmcn.15719">Systematic reviews</a> pulling together data from existing studies should preregister their protocols and describe how the researchers identified which studies to include, assessed their quality and extracted the data. These features are necessary to ensure the review will cover all the available evidence and tell a reader which is worth attending to and which is not.</p>
<p>Certain types of studies, such as one-time surveys of convenience samples that aren’t representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable <a href="https://doi.org/10.1017/S0033291720005127">caused a particular outcome</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WUErib-fXV0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Systematic reviews involve thoroughly identifying and extracting information from existing research.</span></figcaption>
</figure>
<p>All <a href="https://www.equator-network.org/">study designs have standards</a> that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.</p>
<p><a href="https://doi.org/10.1038/s41467-021-21220-5">Studies comparing</a> <a href="https://doi.org/10.1186/s12916-021-01920-x">COVID-19 papers</a> <a href="https://doi.org/10.1371/journal.pone.0241826">with non-COVID-19</a> papers published in the same journals found that COVID-19 papers tended to have lower quality methods and were less likely to adhere to reporting standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against <a href="https://doi.org/10.1136/bmjebm-2020-111584">dredging the data</a> to find “statistically significant” results that could be selectively reported.</p>
<p>Such methodological problems were likely overlooked in the <a href="https://doi.org/10.1038/s41562-020-0911-0">considerably shortened</a> <a href="https://doi.org/10.1162/qss_a_00076">peer-review process</a> for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be <a href="https://doi.org/10.1038/s41467-021-21220-5">13 days, compared with 110 days</a> in 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had a peer-review process of <a href="https://doi.org/10.1162/qss_a_00257">about three weeks</a>.</p>
<h2>Publish-or-perish culture</h2>
<p>These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.</p>
<p>Journals tend to favor <a href="https://doi.org/10.1371/journal.pone.0010068">positive, “novel” findings</a>: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person with head in hands, elbows planted on stacks of paperwork and books littering a desk, glasses and laptop on the side" src="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many researchers feel pressure to publish papers in order to advance their careers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/surrounded-by-work-royalty-free-image/637293916">South_agency/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Academics have worked in a <a href="https://doi.org/10.1089/ees.2016.0223">publish-or-perish</a> <a href="https://doi.org/10.1177/1745691612459058">incentive system</a> for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">flood of mixed-quality COVID-19 information</a> afforded academics an opportunity to increase their publication counts and boost citation metrics as journals sought and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID-19 papers.</p>
<p>Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of <a href="https://doi.org/10.1002/leap.1566">today’s online</a> <a href="https://doi.org/10.1001/jama.2023.3212">mega-journals</a> publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.</p>
<h2>Healthy criticism</h2>
<p>Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an <a href="https://doi.org/10.1057/palgrave.jors.2602573">open society</a> capable of solving complex problems such as a worldwide pandemic.</p>
<p>Publishing a large volume of misinformation disguised as science during a pandemic <a href="https://doi.org/10.1073/pnas.1912444117">obscures true and useful knowledge</a>. At worst, this can lead to bad public health practice and policy. </p>
<p>Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves <a href="https://doi.org/10.1371/journal.pmed.1001747">critically examining the quality</a> of a study’s designs, statistical methods, reproducibility and transparency, not the <a href="https://doi.org/10.1016/j.jclinepi.2021.05.018">number of times it has been cited</a> or tweeted about.</p>
<p>Science depends on a <a href="https://doi.org/10.1007/s10654-023-01049-6">slow, thoughtful and meticulous approach</a> to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two scientists pipetting liquids under a fume hood, with another scientist in the background examining a sample" src="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rigorous science requires careful deliberation and attention, not haste.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/female-scientist-drops-liquid-into-test-tube-royalty-free-image/127871289">Assembly/Stone via Getty Images</a></span>
</figcaption>
</figure>
<p>Public health heavily draws upon disciplines that are <a href="https://doi.org/10.1038/526182a">experiencing</a> <a href="https://doi.org/10.1177/1745691612462588">replication</a> <a href="https://doi.org/10.1371/journal.pmed.0020124">crises</a>, such as psychology, biomedical science and biology. It is similar to these disciplines <a href="https://doi.org/10.1146/annurev-statistics-031219-041104">in terms of its</a> incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much of the public health research on COVID-19 shows that the field suffers from similarly poor-quality methods.</p>
<p>Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.</p>
<p class="fine-print"><em><span>Dennis M. Gorman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Pressure to ‘publish or perish’ and get results out as quickly as possible has led to weak study designs and shortened peer-review processes.
Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/213107
2023-09-19T12:19:00Z
2023-09-19T12:19:00Z
Rising number of ‘predatory’ academic journals undermines research and public trust in scholarship
<p>Taxpayers fund a lot of <a href="https://theconversation.com/the-us-has-ruled-all-taxpayer-funded-research-must-be-free-to-read-whats-the-benefit-of-open-access-189466">university research</a> in the U.S., and the findings, published in scholarly journals, often produce major breakthroughs in medicine, vehicle safety, food safety, criminal justice, human rights and other areas that benefit the public at large. </p>
<p>The bar for publishing in a scholarly journal is often high. Independent experts diligently review and comment on submitted research – without knowing the names of the authors or their affiliated universities. They recommend whether a journal should accept, revise or reject an article. The piece is then carefully edited before it is published. </p>
<p>But in a <a href="https://mdanderson.libanswers.com/faq/206446">growing number of cases</a>, these standards are not being upheld. </p>
<p>Some journals charge academics to publish their research – without first editing or scrutinizing the work against any ethical or editorial standards. These for-profit publications are <a href="https://DOI.org/10.1038/d41586-019-03759-y">often known as predatory journals</a> because they claim to be legitimate scholarly journals but prey on unsuspecting academics, charging them to publish and often misrepresenting their publishing practices. </p>
<p>There were an <a href="https://mdanderson.libanswers.com/faq/206446">estimated 996 publishers</a> that published over 11,800 predatory journals in 2015. That is roughly the same as the number of <a href="https://oaspa.org/">legitimate, open-access academic journals</a> – available to readers without charge and archived in a library supported by a government or academic institution – published around the same time. In 2021, another estimate said there were 15,000 <a href="https://blog.cabells.com/2021/09/01/mountain-to-climb/">predatory journals</a>.</p>
<p>This trend could weaken public confidence in the validity of research on everything from health and agriculture to economics and journalism.</p>
<p>We are <a href="https://scholar.google.com/citations?user=xXJ-XxEAAAAJ&hl=en">scholars of journalism</a> and <a href="https://scholar.google.com/citations?user=LyEoOLQAAAAJ&hl=en&oi=ao">media ethics</a> who see the negative effects predatory publishing is having on our own fields of journalism and mass communication. We believe it is important for people to understand how this problem affects society more broadly. </p>
<p>In most cases, <a href="https://doi.org/10.1038/d41586-020-00031-6">the research published in these journals</a> is mundane and does not get cited by other academics. But in other cases, <a href="https://www.nature.com/articles/d41586-021-00239-0">poorly executed research</a> – <a href="https://www.chemistryworld.com/news/report-calls-for-urgent-action-to-tackle-predatory-publishers/4015520.article">often in the sciences</a> – could mislead scientists and produce untrue findings. </p>
<h2>Misleading practices</h2>
<p>Publishing in journals is considered an essential part of being an academic because professors’ responsibilities generally include contributing new knowledge and ways of solving problems in their research fields. Publishing research is often a key part of academics keeping their jobs, getting promoted or receiving tenure – in an old phrase from academia, you publish or perish. </p>
<p>Predatory publishers often use deception to get scholars to submit their work. That includes false <a href="https://www.elsevier.com/reviewers/what-is-peer-review">promises of peer review,</a> which is a process that involves independent experts scrutinizing research. Other tactics include lack of transparency about charging authors to publish their research. </p>
<p>While fees vary, one publisher told us during our research that its going rate is $60 per printed page. An author reported paying $250 to publish in that same outlet. In contrast, legitimate journals charge a very small amount, or no fee at all, to publish manuscripts after editors and other independent experts closely review the work.</p>
<p>These kinds of journals – about 82.3% of which are located in poor countries, including <a href="https://doi.org/10.1016/j.joi.2018.10.008">India, Nigeria and Pakistan</a> – can prey on junior faculty who are under intense pressure from their universities to publish research. </p>
<p>Low-paid young faculty and doctoral students, who may have limited English language proficiency and poor research and writing skills, are also especially vulnerable to publishers’ aggressive marketing, mostly via email. </p>
<p>Authors who publish in fraudulent journals may add these articles to their resumes, but such articles are rarely read and cited by other scholars, as is the norm with articles in legitimate journals. <a href="https://doi.org/10.1177/1077695820947259">In some instances</a>, articles are never published, despite payment. </p>
<p>Predatory publishers may also have an unusually large breadth of topics they cover. For example, we examined one Singapore-based company called PiscoMed Publishing, which boasts 86 journals in fields spanning religious studies and Chinese medicine to pharmacy and biochemistry. Nonpredatory publishers tend to be more focused in the breadth of their topics. </p>
<p>The Conversation contacted all of the journals named in this article for comment and did not receive a response regarding their work standards and ethics. </p>
<p>Another journal, the <a href="http://www.ijhssnet.com/">International Journal of Humanities and Social Science</a>, says it publishes in about 40 fields, including criminology, business, international relations, linguistics, law, music, anthropology and ethics. We received an email from this journal, signed by its chief editor, who is listed as being affiliated with a U.S. university. </p>
<p>But when we called this university, we were told that the school does not employ anyone with that name. Another person at the school’s Art Department said that the editor in question no longer works there.</p>
<p>It is extremely difficult for people reading a study, or watching a news segment about a particular study, to recognize that it appeared in a predatory journal. </p>
<p>In some instances, these journals’ titles are almost identical to titles of authentic ones or have <a href="https://beallslist.net">generic names like</a> “Academic Sciences” and “BioMed Press.”</p>
<h2>Scholars deceived</h2>
<p>In <a href="https://doi.org/10.3138/jsp-2021-0023">a 2021 study</a>, we surveyed and interviewed scholars in North America, Africa, Asia, Australia and Europe listed as editorial board members or reviewers for two <a href="https://www.piscomed.com/">predatory journalism</a> and mass communication journals. </p>
<p>One company, <a href="https://www.davidpublisher.com">David Publishing</a>, gives a Delaware shipping and mailbox store as its address and uses a Southern California phone number. It says it publishes 52 journals in 36 disciplines, including philosophy, sports science and tourism. </p>
<p>Some scholars told us they were listed as authors in these journals without permission. One name still appeared as an author several years after the scholar’s death.</p>
<p>Our latest, forthcoming study conducted in 2023 surveyed and interviewed a sample of authors of 504 articles in one of those predatory journals focused on journalism and mass communication. </p>
<p>We wanted to learn why these authors – ranging from graduate students to tenured full professors – chose to submit their work to this journal and what their experience was like. </p>
<p>While most authors come from poor countries or other places such as Turkey and China, others listed affiliations with top American, Canadian and European universities. </p>
<p>Many people we contacted were unaware of the journal’s predatory character. One author told us of learning about the journal’s questionable practices only after reading an online posting that “warned people not to pay.” </p>
<h2>A lack of concern</h2>
<p>Some people we spoke with didn’t express concern about the ethical implications of publishing in a predatory journal, including dishonesty with authors’ peers and universities and potential deception of research funders. We have found that some authors invite colleagues to help pay the fees in exchange for putting their names on an article, even if they did none of the research or writing. </p>
<p>In fact, we heard many reasons for publishing in such journals. </p>
<p>These included long waits for peer review and <a href="https://www.tandfonline.com/action/journalInformation?show=journalMetrics&journalCode=rjop20">high rejection rates</a> from <a href="https://www.tandfonline.com/action/journalInformation?show=journalMetrics&journalCode=rjos20">reputable journals</a>. </p>
<p>In other cases, academics said that their universities were more concerned with how much they publish, rather than the quality of the publication that features their work. </p>
<p>“It was very important for me to have it at that time. I never paid again. But I got my promotion. It was recognized by my institution as a full publication. I profited … and it did the job,” one author from the Middle East told us in an interview. </p>
<h2>Why it matters</h2>
<p>Predatory publishing creates a major obstacle in the drive to ensure that new research on critical topics is well-founded and truthful. </p>
<p>This can have implications in health and medical research, among other areas. As one <a href="https://doi.org/10.2519/jospt.2017.0101">health care scholar explained</a>, there is a risk that scientists could incorporate erroneous findings into their clinical practices. </p>
<p>High standards are crucial across all areas of research. Policymakers, governments, educators, students, journalists and others should be able to rely on credible and accurate research findings in their decision making, without constantly double-checking the validity of a source that falsely purports to be reputable.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
In some cases, it can be difficult for academics to know which journals are not credible – but other times, people feel pressure to publish in these publications.
Eric Freedman, Professor of Journalism and Chair, Knight Center for Environmental Journalism, Michigan State University
Bahtiyar Kurambayev, Associate Professor of Media, KIMEP University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/189466
2022-08-29T20:03:06Z
2022-08-29T20:03:06Z
The US has ruled all taxpayer-funded research must be free to read. What’s the benefit of open access?
<figure><img src="https://images.theconversation.com/files/481466/original/file-20220829-50806-wh9yvf.jpg?ixlib=rb-1.1.0&rect=1276%2C815%2C3491%2C2933&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/6ywyo2qtaZ8">Eugenio Mazzone/Unsplash</a></span></figcaption></figure><p>Last week, the United States announced an <a href="https://www.whitehouse.gov/ostp/news-updates/2022/08/25/breakthroughs-for-alldelivering-equitable-access-to-americas-research/">updated policy guidance</a> on open access that will substantially expand public access to science not just in America, but worldwide.</p>
<p>As per the guidance, all US federal agencies must put in place policies and plans so anyone anywhere can immediately and freely access the peer-reviewed publications and data arising from research they fund.</p>
<p>The policies need to be in place by the end of 2025, according to President Biden’s White House Office of Science and Technology Policy (OSTP).</p>
<h2>A substantial step</h2>
<p>The new guidance builds <a href="https://obamawhitehouse.archives.gov/blog/2013/02/22/expanding-public-access-results-federally-funded-research">on a previous memo</a> issued by then president Barack Obama’s office in 2013. That one only applied to the largest funding agencies and, in a crucial difference, allowed for a 12-month delay or embargo for the publications to be available.</p>
<p>Now we’re seeing a substantial step forward in a lengthy effort – extending back to the <a href="https://www.budapestopenaccessinitiative.org/">beginning of this century</a> – to open up access to the world’s research.</p>
<p>We can expect it to act as a catalyst for more policy changes globally. It’s also especially timely given UNESCO’s <a href="https://www.unesco.org/en/natural-sciences/open-science">Open Science Recommendation</a> adopted in 2021. The new OSTP guidance emphasises the primary intention is for the US public to have immediate access to research funded by their tax dollars.</p>
<p>But thanks to the conditions for opening up said research, people worldwide will benefit.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/busting-the-top-five-myths-about-open-access-publishing-14792">Busting the top five myths about open access publishing</a>
</strong>
</em>
</p>
<hr>
<h2>A discriminatory system</h2>
<p>It might seem obvious that with our ubiquitous internet access, there should already be immediate open access to publicly funded research. But that isn’t the case for most published studies.</p>
<p>Changing the system has been challenging, not least because academic publishing is dominated by a small number of <a href="https://dx.plos.org/10.1371/journal.pone.0127502">highly profitable and powerful publishers</a>.</p>
<p>Open access matters for both the public and academics, as the fast-moving emergency of the COVID-19 pandemic amply demonstrated.</p>
<p>Even academics at well-funded universities can mostly only access journals their universities subscribe to – and no institution can afford to subscribe to everything published. Estimates suggest some 2 million research articles were published last year. People outside a university – in a small company, a college, a GP practice, a newsroom, or citizen scientists – have to pay for access.</p>
<p>As the new guidance notes, this lack of public access leads to “discrimination and structural inequalities… [that] prevent some communities from reaping the rewards of the scientific and technological advancements”. Furthermore, lack of access leads to mistrust in research.</p>
<p>The accompanying <a href="https://www.whitehouse.gov/wp-content/uploads/2022/08/08-2022-OSTP-Public-Access-Memo.pdf">OSTP memo</a> highlights that future policies should support scientific and research integrity, with the aim of increasing public trust in science.</p>
<p>COVID-19 is not the first rapid global emergency, and it won’t be the last. For example, doctors not being able <a href="https://www.nytimes.com/2015/04/08/opinion/yes-we-were-warned-about-ebola.html">to access research on Ebola</a> may have directly led to a 2015 outbreak in West Africa.</p>
<p>In the early stages of the COVID-19 pandemic, the <a href="https://trumpwhitehouse.archives.gov/wp-content/uploads/2020/03/COVID19-Open-Access-Letter-from-CSAs.Equivalents-Final.pdf">White House led calls</a> for publishers to make COVID-19 publications open to all. Most (but not all) did and that call led to one of the biggest databases of openly available papers ever assembled – the <a href="https://allenai.org/data/cord-19">CORD-19 database</a>.</p>
<p>But not all of those COVID-19 papers will be permanently openly available, since some publishers put conditions on their accessibility. With the current spread of monkeypox, we are potentially facing another global emergency. In August this year, the White House once again <a href="https://www.whitehouse.gov/ostp/news-updates/2022/08/04/a-call-for-public-access-to-monkeypox-related-research-and-data/">called for publishers</a> to make relevant research open.</p>
<p>The OSTP guidance will finally mean that, at least for US federally funded research, the time of governments having to repeatedly call for publishers to make research open is over.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<h2>The situation in Australia</h2>
<p>In Australia, we don’t yet have a national approach to open access. The two national research funders, the <a href="https://www.nhmrc.gov.au/about-us/resources/open-access-policy">NHMRC</a> and <a href="https://www.arc.gov.au/about-arc/program-policies/open-access-policy">ARC</a>, have policies in place similar to the 2013 US guidance, permitting a 12-month embargo period. The NHMRC consulted last year on an immediate open access policy.</p>
<p>All Australian universities provide access to their research through their repositories, although that access varies depending on individual universities’ and publishers’ policies. Most recently, <a href="https://caul.libguides.com/read-and-publish">the Council of Australian University Librarians negotiated</a> a number of consortial open access deals with publishers. Cathy Foley, Australia’s Chief Scientist, is also considering a <a href="https://www.chiefscientist.gov.au/Dr-Cathy-Foley-delivers-National-Press-Club-Address">national model for open access</a>.</p>
<p>So what’s next? As expected, perhaps, some of the larger publishers are already <a href="https://www.nytimes.com/2022/08/25/us/white-house-federally-funded-research-access.html">making the case</a> for more funding for them to support this policy. It will be important that this policy doesn’t lead to a financial bonanza for these already very profitable companies – nor a consolidation of their power.</p>
<p>Rather, it would be good to see financial support for innovation in publishing, and a recognition that we need a <a href="https://www.coar-repositories.org/news-updates/fostering-bibliodiversity-in-scholarly-communications-a-call-for-action/">diversity of approaches</a> to support an academic publishing system that works for the benefit of all.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/making-australian-research-free-for-everyone-to-read-sounds-ideal-but-the-chief-scientists-open-access-plan-isnt-risk-free-171389">Making Australian research free for everyone to read sounds ideal. But the Chief Scientist's open-access plan isn't risk-free</a>
</strong>
</em>
</p>
<hr>
<p class="fine-print"><em><span>Virginia Barbour is the Director of Open Access Australasia, which advocates for Open Access in Australia and Aotearoa New Zealand.</span></em></p>
Lack of free access to research leads to discrimination, both in academia and for us all. The new guidance from the US is a huge step in the right direction.
Virginia Barbour, Director, Open Access Australasia, Queensland University of Technology
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/153708
2021-03-15T12:56:48Z
2021-03-15T12:56:48Z
6 tips to help you detect fake science news
<figure><img src="https://images.theconversation.com/files/389103/original/file-20210311-20-90hym5.jpg?ixlib=rb-1.1.0&rect=781%2C889%2C4508%2C3098&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If what you're reading seems too good to be true, it just might be.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/dhCGbPx8wpk">Mark Hang Fung So/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>I’m a professor of chemistry, have a Ph.D. and <a href="https://scholar.google.com/citations?user=RpiSPiwAAAAJ&hl=en&oi=ao">conduct my own scientific research</a>, yet when consuming media, even I frequently need to ask myself: “Is this science or is it fiction?”</p>
<p>There are plenty of reasons a science story might not be sound. Quacks and charlatans take advantage of the complexity of science, some content providers can’t tell bad science from good and some politicians peddle fake science to support their positions.</p>
<p>If the science sounds too good to be true or too wacky to be real, or very conveniently supports a contentious cause, then you might want to check its veracity.</p>
<p>Here are six tips to help you detect fake science.</p>
<h2>Tip 1: Seek the peer review seal of approval</h2>
<p>Scientists rely on journal papers to share their scientific results. They let the world see what research has been done, and how.</p>
<p>Once researchers are confident of their results, they write up a manuscript and send it to a journal. Editors forward the submitted manuscripts to at least two external referees who have expertise in the topic. These reviewers can suggest the manuscript be rejected, published as is, or sent back to the scientists for more experiments. That process is called “peer review.”</p>
<p>Research published in <a href="https://undsci.berkeley.edu/article/howscienceworks_16">peer-reviewed journals</a> has undergone rigorous quality control by experts. Each year, about <a href="https://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf">28,000 peer-reviewed journals</a> publish roughly 1.8 million scientific papers. The body of scientific knowledge is constantly evolving and updating, but you can trust that the science these journals describe is sound. Retraction policies help correct the record if mistakes are discovered after publication.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man in white coat in lab at laptop" src="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Peer-reviewed’ means other scientific experts have checked the study over for any problems before publication.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/scientist-using-computer-in-laboratory-royalty-free-image/1194829395">ljubaphoto/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Peer review takes months. To get the word out faster, scientists sometimes post research papers on what’s called a preprint server. These often have “Rxiv” – pronounced “archive” – in their name: medRxiv, bioRxiv and so on. These articles have not been peer-reviewed and so are <a href="https://doi.org/10.1080/10410236.2020.1864892">not validated by other scientists</a>. Preprints provide an opportunity for other scientists to evaluate and use the research as building blocks in their own work sooner.</p>
<p>How long has this work been on the preprint server? If it’s been months and it hasn’t yet been published in the peer-reviewed literature, be very skeptical. Are the scientists who submitted the preprint from a reputable institution? During the COVID-19 crisis, with researchers scrambling to understand a dangerous new virus and rushing to develop lifesaving treatments, preprint servers have been littered with immature and unproven science. <a href="https://arstechnica.com/science/2020/05/a-lot-of-covid-19-papers-havent-been-peer-reviewed-reader-beware/">Fastidious research standards have been sacrificed for speed</a>.</p>
<p>A last warning: Be on the alert for research published in what are called <a href="https://www.nature.com/articles/d41586-019-03759-y">predatory journals</a>. They don’t peer-review manuscripts, and they charge authors a fee to publish. Papers from any of the <a href="https://guides.library.yale.edu/c.php?g=296124&p=1973764">thousands of known predatory journals</a> should be treated with strong skepticism.</p>
<h2>Tip 2: Look for your own blind spots</h2>
<p>Beware of biases in your own thinking that might predispose you to fall for a particular piece of fake science news.</p>
<p>People give their own memories and experiences more credence than they deserve, making it hard to accept new ideas and theories. Psychologists call this quirk the availability bias. It’s a useful built-in shortcut when you need to make quick decisions and don’t have time to critically analyze lots of data, but it messes with your fact-checking skills.</p>
<p>In the fight for attention, sensational statements beat out unexciting, but more probable, facts. The tendency to overestimate the likelihood of vivid occurrences is called the salience bias. It leads people to mistakenly believe overhyped findings and trust confident politicians in place of cautious scientists.</p>
<p>A confirmation bias can be at work as well. People tend to give credence to news that fits their existing beliefs. This tendency helps climate change denialists and anti-vaccine advocates believe in their causes in spite of the scientific consensus against them.</p>
<p>Purveyors of fake news know the weaknesses of human minds and try to take advantage of these natural biases. <a href="https://www.huffpost.com/entry/how-to-overcome-cognitive-bias-and-use-it-to-your-advantage_b_5900fff3e4b00acb75f1844f">Training can help you</a> <a href="https://hbr.org/2015/05/outsmart-your-own-biases">recognize and overcome</a> your own cognitive biases.</p>
<h2>Tip 3: Correlation is not causation</h2>
<p>Just because you can see a relationship between two things doesn’t necessarily mean that one causes the other.</p>
<p>Even if surveys find that people who live longer drink more red wine, it doesn’t mean a daily glug will extend your life span. It could just be that red-wine drinkers are wealthier and have better health care, for instance. Look out for this error in nutrition news.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="gloved hand holds a mouse" src="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What works well in rodents might not work at all in you.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/face-of-tiny-white-mouse-peeps-out-royalty-free-image/157440932">sidsnapper/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Tip 4: Who were the study’s subjects?</h2>
<p>If a study used human subjects, check to see whether it was placebo-controlled. That means some participants are randomly assigned to get the treatment – like a new vaccine – and others get a fake version that they believe is real, the placebo. That way researchers can tell whether any effect they see is from the drug being tested. </p>
<p>The best trials are also double blind: To remove any bias or preconceived ideas, neither the researchers nor the volunteers know who is getting the active medication or the placebo.</p>
<p>The size of the trial is important too. When more patients are enrolled, researchers can identify safety issues and beneficial effects sooner, and any differences between subgroups are more obvious. Clinical trials can have thousands of subjects, but some scientific studies involving people are much smaller; they should address how they’ve achieved the statistical confidence they claim to have.</p>
<p>Check that any health research was actually done on people. Just because a certain drug works <a href="https://twitter.com/justsaysinmice">in rats or mice</a> does not mean it will work for you.</p>
<h2>Tip 5: Science doesn’t need ‘sides’</h2>
<p>Although a political debate requires two opposing sides, a scientific consensus does not. When the media interpret objectivity to mean equal time, it undermines science. </p>
<h2>Tip 6: Clear, honest reporting might not be the goal</h2>
<p>To get their audience’s attention, morning shows and talk shows need something exciting and new; accuracy may be less of a priority. Many science journalists are doing their best to accurately cover new research and discoveries, but plenty of science media are better classified as entertaining rather than educational. <a href="https://www.bmj.com/content/349/bmj.g7346">Dr. Oz</a>, Dr. Phil and Dr. Drew should not be your go-to medical sources. </p>
<p>Beware of medical products and procedures that sound too good to be true. Be skeptical of testimonials. Think about the key players’ motivations and who stands to make a buck.</p>
<p>If you’re still suspicious of something in the media, make sure the news being reported reflects what the research actually found by <a href="https://www.sciencemag.org/careers/2016/03/how-seriously-read-scientific-paper">reading the journal article itself</a>.</p>
<p>[<em>Deep knowledge, daily.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=deepknowledge">Sign up for The Conversation’s newsletter</a>.]</p>
<p class="fine-print"><em><span>Marc Zimmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Whenever you hear about a new bit of science news, these suggestions will help you assess whether it’s more fact or fiction.
Marc Zimmer, Professor of Chemistry, Connecticut College
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/146394
2020-09-17T04:27:27Z
2020-09-17T04:27:27Z
‘Science is political’: Scientific American has endorsed Joe Biden over Trump for president. Australia should take note
<p>In an unprecedented step, prestigious science publication <a href="https://www.britannica.com/topic/Scientific-American">Scientific American</a> has <a href="https://www.scientificamerican.com/article/scientific-american-endorses-joe-biden/">launched</a> a scathing attack on President Donald Trump and endorsed his opponent, Democratic candidate Joe Biden, in the upcoming US election. It’s the first presidential endorsement in the magazine’s 175-year history.</p>
<p>To this, we say: about bloody time! As we’ve <a href="https://theconversation.com/gentlemens-rules-are-out-scientists-its-time-to-unleash-the-beast-729">noted before</a>:</p>
<blockquote>
<p>Science is political. The science we do is inherently shaped by the funding landscape of government and the problems and issues of society. This means that to have any influence on how science is organised and funded in Australia (or the US or any other country), we as scientists and science communicators must act in ways that matter in the arena of politics.</p>
</blockquote>
<p>It’s now more critical than ever, as the editors at Scientific American clearly lay out, that the people who are actually knowledgeable about the world’s crises <a href="https://theconversation.com/when-politicians-listen-to-scientists-we-all-benefit-74443">speak out and represent</a> that knowledge (or “<a href="https://medium.com/the-science-collective/individual-intelligence-vs-collective-wisdom-ddce5fe42ed">collective wisdom</a>”) in public, out loud and with their names attached.</p>
<p><a href="https://edition.cnn.com/videos/politics/2020/09/14/trump-wildfires-climate-change-coronavirus-collins-dnt-lead-vpx.cnn">Under Trump</a>, science isn’t just ignored. It is lampooned and <a href="https://www.ucsusa.org/resources/attacks-on-science">directly attacked</a>, especially on issues such as <a href="https://www.bbc.com/news/world-us-canada-46351940">climate change</a> and the coronavirus pandemic. This actively threatens <a href="https://theintercept.com/2020/04/02/is-donald-trump-criminally-responsible-for-coronavirus-deaths/">the lives</a> (and livelihoods) of not just millions of Americans, but countless others around the world. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/vfLZOkn0chc?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Throughout the coronavirus pandemic, Trump has shown blatant disregard for scientific recommendations and has actively peddled misinformation, such as when he suggested UV light could be used to treat patients.</span></figcaption>
</figure>
<h2>Respect the messenger</h2>
<p>In the past, <a href="https://theconversation.com/distrust-of-experts-happens-when-we-forget-they-are-human-beings-76219">it has been suggested</a> scientists who comment beyond their specific, narrow sphere of reach by delving into politics are tainting their credibility – perhaps even behaving unethically. </p>
<p>But as we now stare down the barrel of an ongoing global pandemic (and relentless climate change continuing in the background), to remain quiet on the politics is not just unethical, but actively dangerous.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/5-big-environment-stories-you-probably-missed-while-youve-been-watching-coronavirus-135364">5 big environment stories you probably missed while you've been watching coronavirus</a>
</strong>
</em>
</p>
<hr>
<p>The argument that science is somehow tainted by offering policy or political opinions is an idea <a href="https://theconversation.com/distrust-of-experts-happens-when-we-forget-they-are-human-beings-76219">whose time has long gone</a>. </p>
<p>Who is better placed to add valuable weight to public debates about the key problems we’re facing, than those who represent the voice of evidence, reason and debate (such as Scientific American)? </p>
<p>As one of us <a href="https://jcom.sissa.it/archive/16/01/JCOM_1601_2017_C01">has previously argued</a>, in Australia we should encourage scientists and science communicators to:</p>
<blockquote>
<p>Become more active in challenging the status quo, or to help support those who wish to by engendering a professional environment that encourages risk-taking and speaking out in public about critical social issues.</p>
</blockquote>
<h2>It’s the principle, not the votes</h2>
<p>Scientific American is not entirely alone in pushing for the involvement of scientists in public policy and action. Other reputable publications have taken similar stances in the past. </p>
<p>In 2017, <a href="https://www.nature.com/news/why-researchers-should-resolve-to-engage-in-2017-1.21236">Nature argued</a> “debates over climate change and genome editing present the need for researchers to venture beyond their comfort zones to engage with citizens”. Earlier, in 2012, <a href="https://www.nature.com/news/a-vote-for-science-1.11634">Nature explicitly endorsed</a> Democratic presidential candidate Barack Obama over Republican challenger Mitt Romney. </p>
<p>In Australia, our news publications have a tradition of <a href="https://www.theguardian.com/media/2018/sep/20/very-australian-coup-murdoch-turnbull-political-death-news-corps">endorsing political parties</a> at federal elections, but our science publishing landscape has typically remained agnostic. </p>
<p>Peak bodies such as the Australian Academy of Science, and Science and Technology Australia, have <a href="https://www.science.org.au/academy-newsletter/australian-academy-science-newsletter-104/academy-urges-political-parties">commented</a> on the political decision-making process, but have rarely been so forthright as the Scientific American’s recent editorial.</p>
<p>Not only should scientists take a stand, they should also be encouraged and professionally acknowledged for it. </p>
<p>Scientists as citizens have the right to advocate for political positions and figures that support the best possible evidence. In fact, when it comes to matters as serious as COVID-19 and climate change, we believe they have an obligation to.</p>
<p>Scientific American’s intervention may not impact votes, but that’s not the point. The point is it’s crucial for people who believe in knowledge and expertise to stand up and call out misinformation for what it is. To do less is to accept the current state. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Editor in Chief of Scientific American Laura Helmuth speaking to an audience." src="https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=501&fit=crop&dpr=1 754w, https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=501&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/358510/original/file-20200917-14-1nsaqow.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=501&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Laura Helmuth is the ninth and current Editor in Chief of the Scientific American magazine. She was appointed to the role in April this year.</span>
<span class="attribution"><a class="source" href="https://twitter.com/webmz_/status/1249754585822121990">@webmz_/Twitter</a></span>
</figcaption>
</figure>
<h2>Australia’s work in progress</h2>
<p>Nonetheless, many scientists in Australia rely on government funding. This can make it difficult to speak up when legitimate evidence clashes with the orientation of the government of the day. Confronted with the possible loss of funding, what can a scientist do? </p>
<p>There’s no perfect solution. Many may feel the risks of speaking are too great. For many, they will be. </p>
<p>In such cases, scientists could perhaps look for intermediaries to make their case on their behalf – whether these are trustworthy journalists, or publicly visible academics like us. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/research-reveals-shocking-detail-on-how-australias-environmental-scientists-are-being-silenced-140026">Research reveals shocking detail on how Australia's environmental scientists are being silenced</a>
</strong>
</em>
</p>
<hr>
<p>In the long term, defending those who have gone out of their way to act responsibly will help. The more this becomes normal, the more likely it will become the norm. But it’s also an unfortunate reality that change rarely occurs without discomfort. </p>
<p>When it comes to truly world-shaking crises like COVID-19 and climate change, scientists are political citizens like everyone else. And just like everyone else, they need to weigh the price of action against the price of inaction. </p>
<p>Speaking out can’t always be someone else’s job.</p>
<p class="fine-print"><em><span>Rod Lamberts has previously received funding from the ARC.</span></em></p><p class="fine-print"><em><span>Will J Grant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Trump doesn’t just ignore science, he attacks it. Australia’s experts have an obligation to speak out on crises such as the coronavirus pandemic, even if it means picking a side in our politics.
Rod Lamberts, Deputy Director, Australian National Centre for Public Awareness of Science, Australian National University
Will J Grant, Senior Lecturer, Australian National Centre for the Public Awareness of Science, Australian National University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/116987
2019-05-27T19:44:29Z
2019-05-27T19:44:29Z
Misreporting the science of lab-made organs is unethical, even dangerous
<figure><img src="https://images.theconversation.com/files/276525/original/file-20190527-40038-9nnn0t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">You'll be waiting a while for functional 3D-printed human organs. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/blur-image-patients-hospital-waiting-see-1142067620?src=uzFlIBHcOL_oVrddBMpR5g-1-15">from www.shutterstock.com</a></span></figcaption></figure><p>I work in the field of bioprinting, where the aim is to build biological tissues by printing living cells into 3D structures.</p>
<p>Last month I found my Facebook news feed plastered with an amazing story about “the first 3D printed heart using a patient’s own cells”. A <a href="https://www.washingtonpost.com/video/national/health-science/researchers-create-3-d-printed-heart-using-patients-cells/2019/04/17/e832e463-a81e-44f5-a5e8-7db9fc9c6f94_video.html">video</a> showed a beautiful, healthy-looking heart apparently materialising inside a vat of pinkish liquid.</p>
<p>Big news. According to an <a href="https://wiley.altmetric.com/details/59043129">impact tracking algorithm</a>, the story has been picked up by 145 news outlets, tweeted 2,390 times to 3.8 million followers (as of May 27, 2019). Articles on Facebook have at least 13,000 shares, and videos about the story have been viewed well over 3 million times.</p>
<p>Unfortunately, many of these media reports don’t match up well with the original science.</p>
<p>Over-reporting of medical science is <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/bioe.12414">unethical</a>, and occasionally dangerous. It’s a problem all of us who work in the creation and telling of science can act to fix. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/titanium-is-the-perfect-metal-to-make-replacement-human-body-parts-115361">Titanium is the perfect metal to make replacement human body parts</a>
</strong>
</em>
</p>
<hr>
<h2>How they printed a ‘heart’</h2>
<p>In the original printed “heart” <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/advs.201900344">scientific paper</a>, Israeli scientists describe how they built on their own <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/adma.201803895">earlier work</a> on bio-inks (printable materials and cells) to create 3D structures in the laboratory. The main focus was to print a square “patch” of heart cells and blood-vessels using a “personalised” bio-ink; one where all of the cells and materials came from a particular patient. This is important because bio-inks typically contain some synthetic or animal-derived materials.</p>
<p>As a final flourish the team also printed the cells into a thumbnail-sized, heart shape. The text of the original paper clearly states the printed heart-shaped structure is not a real heart, and lacks most of the features required to make a heart work. But, along with those striking visuals, this is the aspect of the work that helped the paper become such a media hit. </p>
<p>This might sound like the envious griping of a rival scientist. However, I’m not criticising the science. This is impressive work – the cardiac patches may indeed turn out to be an important development in the field. </p>
<p>I’m more worried about media reports giving the impression that our field of research is far more advanced than it is. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/0NmWOHuy-o8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Your heart is a really complicated organ.</span></figcaption>
</figure>
<h2>When medical research is overplayed</h2>
<p>Sensationalism is <a href="https://www.nature.com/news/science-journalism-can-be-evidence-based-compelling-and-wrong-1.21591">rife in science journalism</a>. And the 3D bioprinting field is interesting in particular, as it is currently fuelled by a “perfect storm” of hype: it builds on the wider buzz around 3D printing, is deceptively easy to understand, and blends ideas of <a href="https://www.youtube.com/watch?v=zc6xBFZbrTc">science fiction</a> with potential impact in real health outcomes. </p>
<p>There are other recent examples of sensationalised reporting in the bioprinting field. </p>
<p>For example, Wake Forest University had to issue a <a href="https://web.archive.org/web/20110310061956/http://www.wfubmc.edu/Research/WFIRM/Media-Reports-on-Kidney-Printing-Inaccurate.htm">clarification notice</a> following reports its scientist Anthony Atala had <a href="https://www.the-scientist.com/features/organs-on-demand-38787">“printed” a human kidney live on stage</a>.</p>
<p>In December 2015, news articles announced that a 14-year-old boy had become the first human patient to be implanted with a <a href="https://www.huffingtonpost.co.uk/entry/3d-printed-nose_n_5685835ee4b06fa68882578b">“3D printed nose”</a>. In reality, 3D printing was only used to make a template to help the surgeon assemble pieces of <a href="http://www.prweb.com/releases/2015/12/prweb13120518.htm">donor cartilage into the correct shape</a>.</p>
<p>We’re left with the impression that 3D bioprinting is a mature, clinically available technology, when currently it is not. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/edible-seaweed-can-be-used-to-grow-blood-vessels-in-the-body-112618">Edible seaweed can be used to grow blood vessels in the body</a>
</strong>
</em>
</p>
<hr>
<h2>What’s the harm in a bit of hype?</h2>
<p>There are numerous <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/bioe.12414">ethical downsides</a> linked with over-enthusiastic portrayals of bioprinting in the media. </p>
<p>The problem is, mass media is one of the most important <a href="https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020215">sources of health and medical information for the general public</a>, especially prospective patients. </p>
<p>Positive portrayals of a novel technology in the media can <a href="https://europepmc.org/abstract/med/20866017">affect patient consent to undergo treatment</a> and can even prompt prospective participants <a href="https://ascopubs.org/doi/abs/10.1200/JCO.2002.04.084">to request enrolment in clinical trials</a>.</p>
<p>I’ve seen this myself. Whenever <a href="https://www.biofab3d.org/">our own</a> research is reported, particularly on television, the next morning I get phone calls from people who want to sign up for a particular treatment. TV reports rarely communicate that we are still at an experimental stage, with human trials still years away.</p>
<p>In the worst case, the buzz around new technology can provide an opportunity for unscrupulous charlatans, such as the cosmetic surgeon who reportedly sold an unapproved stem-cell technology in Beverly Hills. One patient ended up with <a href="https://www.scientificamerican.com/article/stem-cell-cosmetics/">fragments of bone in her eyelid</a>. </p>
<p><a href="https://www.newscientist.com/article/dn20671-man-receives-worlds-first-synthetic-windpipe/">Media reports</a> of infamous thoracic surgeon Paolo Macchiarini’s implantation of a “<a href="http://healthland.time.com/2012/01/13/cancer-patient-receives-a-man-made-windpipe/">synthetic trachea</a>” arguably provided him a platform to accelerate his <a href="http://ki.se/sites/default/files/karolinska_institutet_and_the_macchiarini_case_summary_in_english_and_swedish.pdf">research program</a>. Seven of the nine patients who received one of his synthetic trachea transplants have <a href="http://www.bbc.com/news/magazine-37311038">since died</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/whose-hearts-livers-and-lungs-are-transplanted-in-china-origins-must-be-clear-in-human-organ-research-108077">Whose hearts, livers and lungs are transplanted in China? Origins must be clear in human organ research</a>
</strong>
</em>
</p>
<hr>
<h2>An anatomy of hype</h2>
<p>Fed by enthusiastic reporting, technologies tend to follow a pattern called the Gartner <a href="https://www.gartner.com/en/research/methodologies/gartner-hype-cycle">Hype Cycle</a>: first buoyed to an unsustainable “peak of inflated expectations” before falling to the “trough of disillusionment”. </p>
<p>The phenomenon can bring benefits to many players in the industry of science. So how can we fix the situation? </p>
<p>The exaggerated claims around particular stories tend to build upon one another in a snowball effect. This means all of those involved in creating and sharing the stories of science can step up: scientists, journals, universities and journalists. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-journalism-is-in-australias-interest-but-needs-support-to-thrive-79106">Science journalism is in Australia’s interest, but needs support to thrive</a>
</strong>
</em>
</p>
<hr>
<p>Salesmanship has become an indispensable skill for modern scientists – really, every grant application is a sales pitch. Indeed, academic science as a whole seems to be tending toward ever more bluster. </p>
<p>In published papers, the use of positive terms such as “innovative,” “unprecedented” and “groundbreaking” has <a href="https://www.bmj.com/content/351/bmj.h6467">increased by thousands of percent over the past four decades</a>. Scientists need to be wary of this trend and keep themselves in check when speaking to the media – especially in cases where their words will be taken very seriously by prospective patients and patient advocacy groups.</p>
<p>Journals and article reviewers can take responsibility for ensuring they publish top-quality science, and that the language in an article is accurate and not overblown. This includes the article title, which is sometimes the only part of an article that journalists and general readers see. </p>
<p>Language choices are also vital in materials coming out of university press offices. </p>
<p>Some reporters take press releases at face value, regurgitating lines or paragraphs verbatim. Non-specialist science reporters may not understand a field in enough detail to question a press release’s framing, or they may not invest the time to place a new announcement in its broader context. Asking other experts for their views on a new piece of research is vital in science reporting. </p>
<h2>A risky symbiosis</h2>
<p>A symbiosis has evolved between scientists and the media: scientists need the media to bolster their record of exposure and “impact” on the next grant application. The media needs scientists for those shareable (and all too rare) positive, feel-good stories. </p>
<p>There is a stark mismatch between the elements required of a modern news story (novelty, impact), and the reality of medical research (slow, meticulous, often incremental). This can result in a <a href="https://access.portico.org/stable?au=phwwtrq8rt">distorted depiction of medical research</a>. </p>
<p>When these pressures push the story too far, they can end up spinning a fairytale. And with medical research in particular, fairytales can be dangerous.</p><img src="https://counter.theconversation.com/content/116987/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Cathal D. O'Connell is Centre Manager of BioFab3D, a 3D bioprinting lab based at St Vincent's Hospital Melbourne. He is a member of the Tissue Engineering and Regenerative Medicine International Society, the Australian Society for Biomaterials and Tissue Engineering and the International Society for Biofabrication.</span></em></p>
There is a stark mismatch between the elements required of a modern news story – unique, high impact – and the reality of medical research being slow, meticulous and progressing one step at a time.
Cathal D. O'Connell, VC Postdoctoral Fellow, RMIT University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/113856
2019-04-10T14:04:45Z
2019-04-10T14:04:45Z
How the open access model hurts academics in poorer countries
<figure><img src="https://images.theconversation.com/files/264835/original/file-20190320-93048-iv2dqr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Open access journals come with hidden costs.</span> <span class="attribution"><span class="source">rvlsoft/Shutterstock</span></span></figcaption></figure><p><a href="http://science.sciencemag.org/content/342/6154/58.full">The rise</a> of open access publishing should be applauded. Scientific research and literature should be made available to everyone, with no cost to the reader.</p>
<p>But there’s a catch: nothing is actually free and someone has to pay. The open access model merely changes who pays. So rather than individuals or institutions paying to have access to publications, increasingly, academics are expected to pay for publishing their research in these “open access” journals. In this way, publishers continue to make money even though they no longer charge readers to access their journals. </p>
<p>The bottom line: the cost has shifted from readers paying for access to researchers paying to publish.</p>
<p>And these charges are substantial. For example, PLOS ONE charges academics US$1,595 per paper; PLOS Biology charges US$3,000 and Cell Reports US$5,000. Some journals call this cost a “publication fee”; others refer to “article processing charges”. Ironically, the revenue received this way can be much higher than that from subscriptions – and yet the costs are minimal, because the publications are digital, with no hard-copy costs and little administration. </p>
<p>In many institutions, this cost is borne by individual researchers. It is a huge burden, particularly in developing countries with weaker currencies. Some universities are able to cover part or all of the cost of open access articles, but some make no provision. Universities in most economies, particularly in the developing world, are under <a href="http://www.cedol.org/wp-content/uploads/2012/02/Steve-Maharey-article.pdf">huge financial pressure</a>.</p>
<p>An urgent discussion is needed around the cost of research publications. A more equitable system, in which the full costs and benefits are properly rewarded, is crucial. </p>
<h2>Rising costs</h2>
<p>There has been some debate about the rising cost of journal subscriptions and the University of California has recently “<a href="https://www.mercurynews.com/2019/03/04/demanding-open-access-uc-rebuffs-worlds-largest-publisher/">broken away</a>” from academic publisher Elsevier, stopping its subscriptions entirely.</p>
<p>There is, however, little focus on the costs of open access to researchers in the developing world. Most people we have spoken to inside academia are under the impression that these costs are waived. But that’s only the case for some journals in 47 of the world’s “<a href="https://www.un.org/development/desa/dpad/wp-content/uploads/sites/45/publication/ldc_list.pdf">least developed” nations</a>; researchers in the 58 other countries in the developing world must pay the full price.</p>
<p>Currently, individual research programmes must bear the rising cost of open access publication. University researchers write grants for funding research and providing graduate students with scholarships. Few granting agencies take the cost of open access publication into account – and so publication costs eat into whatever precious grant funding researchers get.</p>
<p>In the <a href="https://fabinet.up.ac.za/index.php/people-profile?profile=908">research programme</a> I (Professor Wingfield) run, we’ve found that it is just too expensive to publish only in open access journals. Many of the articles in subscription journals are now made available online between six months and a year after publication. This time lag can be problematic in fast-moving fields.</p>
<p>The cost of a PLOS ONE article is 20% of the cost of a Masters student’s scholarship. So the choice is: “do I give a Masters student a scholarship, or publish more in open access journals?” We are trying to do both, and we are sure that’s the approach many research programmes are trying to take. But as more journals take the open access route, this is going to become more difficult. In future, if we want to publish more articles in open access journals, we will have to reduce the number of Masters, doctoral and postdoctoral students in our programmes.</p>
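The trade-off described above can be sketched in a few lines. The US$1,595 fee and the 20% ratio come from this article; the grant total and paper count below are purely hypothetical.

```python
# A rough sketch of the scholarship-vs-publication trade-off described above.
# The US$1,595 fee and the 20% ratio come from the article; the grant total
# and paper count are hypothetical illustrations.

APC = 1595                      # PLOS ONE article processing charge, US$
SCHOLARSHIP = APC / 0.20        # implied Masters scholarship cost: US$7,975

grant = 50_000                  # hypothetical annual research budget, US$
papers = 6                      # publishing six papers open access...
remaining = grant - papers * APC
scholarships = int(remaining // SCHOLARSHIP)
print(scholarships)             # ...leaves funds for only 5 scholarships
```

With these invented numbers, every three extra open access papers cost roughly half a scholarship, which is the squeeze the article describes.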
<p>This isn’t a problem that’s unique to our research groups or university. Colleagues in Europe and the US are also concerned about the cost of publishing in open access journals. But the problem is amplified in institutions located in developing economies.</p>
<h2>Finding solutions</h2>
<p>One of the solutions to this problem lies with publishing houses. Of course publishers want to make money. But if they’re serious about genuine open access and getting more authors from the developing world then some serious discussions are needed about reworking the current model. </p>
<p>One suggestion is to “flip” the current model, so there would only be open access and no subscription-only journals. This, however, may still be too expensive for many universities in the developing world, which currently cannot afford journal subscriptions.</p>
<p>Some journals are already helping authors by offering incentives and rewards to reviewers. Editors approach experts in their fields to review manuscripts; this is the basis of peer review. These reviewers receive no remuneration for their input but are essential to the process. In some cases, journals offer reviewers subscription access for a year. This only benefits the individual reviewer, not the organisation which pays their salary. </p>
<p>This isn’t an ideal approach for universities. Perhaps publishers could consider a voucher approach in which vouchers accrue to the institution that pays the reviewer’s salary. These vouchers could contribute towards subscription costs or the article publication charges. More altruistic publishers could even donate vouchers to universities in the developing world.</p>
<p>The use of such vouchers could also encourage academics to undertake reviews. It’s becoming increasingly difficult to find reviewers for journal articles. Knowing that there is some benefit to their institutions would inspire more people to accept the work of reviewing. </p>
<p>Another possible solution is pressuring open access journals to waive charges for researchers in developing countries. Academics could also be encouraged to write first for journals that are affiliated to societies. Profits from these kinds of journals go back into supporting science through research grants, travel grants and meeting support. </p>
<p>And researchers must start incorporating publishing costs when applying for grants. Some major funders already encourage this, as does South Africa’s National Research Foundation <a href="https://www.nrf.ac.za/sites/default/files/documents/IFRR%20Framework%20and%20funding%20Guide%202018-%20Final%202Feb2018.pdf">in some cases</a>. Other granting agencies should be urged to do the same.</p><img src="https://counter.theconversation.com/content/113856/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Brenda Wingfield receives funding from NRF (National Research Foundation) and DST (South African Dept of Science and Technology) as the DST/NRF SARChI (research chair) in Fungal Genomics. I am the vice president of Academy of Science of South Africa (ASSAf) and the Secretary General of the International Society of Plant Pathology.</span></em></p><p class="fine-print"><em><span>Bob Millar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
An urgent discussion is needed around the cost of research publications.
Brenda Wingfield, Vice President of the Academy of Science of South Africa and DST-NRF SARChI chair in Fungal Genomics, Professor in Genetics, University of Pretoria, University of Pretoria
Bob Millar, Professor and Director, Centre for Neuroendocrinology, University of Pretoria
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/105061
2018-10-24T14:33:27Z
2018-10-24T14:33:27Z
How to read and learn from scientific literature, even if you’re not an expert
<figure><img src="https://images.theconversation.com/files/241808/original/file-20181023-169801-c0kgls.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's hard work, but reading scientific literature can be very valuable.</span> <span class="attribution"><span class="source">Brendan Howard/Shutterstock</span></span></figcaption></figure><p>Reading scientific literature is not for the faint-hearted. It’s dense, and very often full of foreign terms and ideas. </p>
<p>It also assumes a basic understanding of the discipline in question. I can’t imagine that many people outside the world of theoretical physics are reading journal articles on the subject. That makes sense: <a href="https://www.nature.com/news/it-s-not-just-you-science-papers-are-getting-harder-to-read-1.21751">research has found</a> that scientific literature across disciplines is getting <a href="https://www.biorxiv.org/content/early/2017/03/28/119370">more complicated</a>.</p>
<p>But as more and more journals embrace the principles of <a href="https://doaj.org/">open access</a>, and more information becomes freely available online, curious readers are probably more likely to start engaging with scientific literature. That’s a good thing. Research shouldn’t be regarded as a closely kept secret for a small number of people. In a world full of half truths, simplistic and misleading summaries, and outright “fake news”, being able to read and engage with scientific literature can be a powerful weapon.</p>
<p>Of course, you can also seek out examples of scientists writing for the public. But be wary: not all scientists are willing to do this; we are, on the whole, very picky about details and don’t like generalisations. So try to engage with scientific literature where you can: it will be hard work in the beginning if you have no scientific background, but it’s a skill that can be developed.</p>
<p>So, if you’d like to start reading more scientific literature, here are a few tips to improve your experience. I’m focusing largely on the life sciences since that’s my area of expertise.</p>
<h2>Making sense of articles</h2>
<p>Science is about asking and answering questions. Scientific articles are the way in which scientists communicate their results to their peers. Here’s how to navigate those articles.</p>
<p><strong>Choose journals that publish good science:</strong></p>
<p>“Good science” is rigorous, verifiable and rooted in a broader body of research. There is, however, an increasing number of scientific journals available. Some have better credentials than others; often, these are linked to reputable scientific societies. For instance, the South African Journal of Botany is the journal of the South African Association of Botanists. </p>
<p>Only people with a four-year degree who are active in the field can be members of the society. The same sort of rigour is applied to who can publish in the journal.</p>
<p>When journals aren’t linked to societies, you can look at their editors’ credentials. Reputable scientists are unlikely to allow their names to be linked to fraudulent or predatory journals – those that charge a fee to publish articles without any review or editing. Such journals are not reputable, and do not publish robust, good science. </p>
<p>Also, don’t be fooled by people’s titles. I would hope that no one would consult me about heart surgery but I sometimes see adverts where a “doctor” has endorsed a product – often one that has nothing to do with their field of expertise.</p>
<p><strong>Start with the abstract for a broad overview:</strong></p>
<p>It’s expensive to subscribe to most journals or to buy entire articles online. But even limited access journals usually supply the abstract for free. This summarises the article and usually gives the major findings. You can then decide whether you want more details and are prepared to buy the article, if it’s not open access, or to keep reading if it is.</p>
<p><strong>And then continue in a chronological fashion:</strong></p>
<p>Most articles have an introduction which introduces the topic and sets the scene. It usually includes a statement of the aim of the study – essentially, the question that the authors set out to answer. It also provides references to previous literature, which could be useful for understanding the topic. There will be references throughout the article; this is a way of ensuring that all statements are substantiated with reference to the published literature.</p>
<p>The introduction is usually followed by the materials and methods (although in some journals this section might be relegated to the end of the article). Here, the authors provide details about the methodology used in their experiments. This is where things can get very technical, but you will also see constant references to other published research using the same or similar methods, which can help you understand them better. </p>
<p>Then comes the results section, which presents what the experiments yielded. This, too, is likely to be very technical, but it is also where the details are provided. </p>
<p>The last section is the discussion, which provides the authors’ interpretation of the results. This is often what scientists read most carefully, since it’s where the authors “connect the dots”; they are also likely to provide a conclusion and suggest an answer to the question they were trying to answer. </p>
<h2>Read widely</h2>
<p>Once you’re finished reading one article on a topic, read some more. You should never just believe what is stated in a single article. Science is very repetitive and builds on the research that has come before, so researchers are often repeating others’ experiments. This is where the in-text references come in handy. They provide a way for results from one laboratory to be checked and tested by others. So, to get the truth about a topic, read a number of articles about it.</p><img src="https://counter.theconversation.com/content/105061/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Brenda Wingfield receives funding from NRF and DST. She works for the University of Pretoria and her research is done in FABI. She is vice president of ASSAf and holds a research chair in Fungal Genomics. </span></em></p>
Scientific articles are the way in which scientists communicate their results to their peers.
Brenda Wingfield, Vice President of the Academy of Science of South Africa and DST-NRF SARChI chair in Fungal Genomics, Professor of Genetics, University of Pretoria
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/99365
2018-07-12T20:02:44Z
2018-07-12T20:02:44Z
When to trust (and not to trust) peer reviewed science
<figure><img src="https://images.theconversation.com/files/227100/original/file-20180711-27021-ndukml.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Academic journals rely on peer review to support editors in making decisions about what to publish. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/two-scientists-lab-journal-discussing-results-104042246?src=l2YWk81Y4C4ESJbrfppTPg-1-0">from www.shutterstock.com </a></span></figcaption></figure><p><em>The article is part of our occasional long read series <a href="https://theconversation.com/au/topics/zoom-out-51632">Zoom Out</a>, where authors explore key ideas in science and technology in the broader context of society.</em></p>
<hr>
<p>The words “published in a peer reviewed journal” are sometimes considered as the gold standard in science. But any professional scientist will tell you that the fact an article has undergone peer review is a long way from an ironclad guarantee of quality.</p>
<p>To know what science you should <em>really</em> trust you need to weigh the subtle indicators that scientists consider. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-i-disagree-with-nobel-laureates-when-it-comes-to-career-advice-for-scientists-80079">Why I disagree with Nobel Laureates when it comes to career advice for scientists</a>
</strong>
</em>
</p>
<hr>
<h2>Journal reputation</h2>
<p>The standing of the journal in which a paper is published is the first thing. </p>
<p>Every scientific field has broad journals (like <a href="https://www.nature.com/nature/">Nature</a>, <a href="http://www.sciencemag.org/">Science</a> and <a href="http://www.pnas.org/">Proceedings of the National Academy of Sciences</a>) and many more specialist journals (like the <a href="http://www.jbc.org/">Journal of Biological Chemistry</a>). But it is important to recognise that hierarchies exist. </p>
<p>Some journals are considered more prestigious, or frankly, better than others. The “<a href="https://researchguides.uic.edu/if/impact">impact factor</a>” (which reflects how many citations papers in the journal attract) is one simple, if controversial, measure of the importance of a journal. </p>
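As a rough guide, the standard two-year impact factor divides the citations a journal’s recent papers received this year by the number of papers it published over the previous two years; the figures below are invented for illustration.

```python
# The standard two-year impact factor: citations received this year by items
# the journal published in the previous two years, divided by the number of
# citable items published in those two years. Figures here are invented.

def impact_factor(citations_to_recent_items: int, citable_items: int) -> float:
    return citations_to_recent_items / citable_items

# A journal whose 400 papers from the past two years drew 1,200 citations
# this year has an impact factor of 3.0.
print(impact_factor(1200, 400))  # 3.0
```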
<p>In practice, every researcher carries a mental list of the top relevant journals in their head. When choosing where to publish, each scientist makes their own judgement on how interesting and how reliable their new results are. </p>
<p>If authors aim too high with their target journal, then the editor will probably reject the paper at once on the basis of “interest” (before even considering scientific quality).</p>
<p>If an author aims too low, then they could be selling themselves short – this could represent a missed opportunity for a trophy paper in a top journal that everyone would recognise as significant (if only because of where it was published). </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<p>Researchers sometimes talk their paper up in a cover letter to the editor, and aim for a journal one rank above where they expect the manuscript will eventually end up. If their paper is accepted they are happy. If not, they resubmit to a lower ranked, or in the standard euphemism, a “more specialised journal”. This wastes time and effort, but is the reality of life in science.</p>
<p>Neither editors nor authors like to get things wrong. They are weighing up the pressure to break a story with a big headline against the fear of making a mistake. A mistake in this context means publishing a <a href="http://science.sciencemag.org/content/273/5277/924">result</a> that becomes quickly embroiled in <a href="https://www.space.com/33690-allen-hills-mars-meteorite-alien-life-20-years.html">controversy</a>. </p>
<p>To safeguard against that, three or four peer reviewers (experienced experts in the field) are appointed by the editor to help.</p>
<h2>The peer review process</h2>
<p>At the time of submitting a paper, the authors may suggest reviewers they believe are appropriately qualified. But the editor will make the final choice, based on their understanding of the field and also on how well and how quickly reviewers respond to the task.</p>
<p>The identity of peer reviewers is usually kept secret so that they can comment freely (but sometimes this means they are quite harsh). The peer reviewers will repeat the job of the editor, and advise on whether the paper is of sufficient interest for the journal. Importantly, they will also evaluate the robustness of the science and whether the conclusions are supported by the evidence.</p>
<p>This is the critical “peer review” step. In practice, though, the level of scrutiny remains connected to the standing of the journal. If the work is being considered for a top journal, the scrutiny will be intense. The top journals seldom accept papers unless they consider them to be not only interesting but also watertight and bulletproof – that is, they believe the result is something that will stand the test of time.</p>
<p>If, on the other hand, the work is going into a little-read journal with a low impact factor, then sometimes reviewers will be more forgiving. They will still expect scientific rigour but are likely to accept some data as inconclusive, provided the researchers point out the limitations of their work. </p>
<p>Knowing this is how the process goes, whenever a researcher reads a paper they make a mental note of where the work was published. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-was-missing-in-australias-1-9-billion-infrastructure-announcement-96723">What was missing in Australia's $1.9 billion infrastructure announcement</a>
</strong>
</em>
</p>
<hr>
<h2>Journal impact factor</h2>
<p>Most journals are reliable. But at the bottom of the list in terms of impact lie two types of journals: </p>
<ol>
<li><p>respectable journals that publish peer reviewed results that are solid but of limited interest – since they may represent dead ends or very specialist local topics</p></li>
<li><p>so-called “predatory” journals, which are more sinister – in these journals the peer review process is either superficial or non-existent, and editors essentially charge authors for the privilege of publishing.</p></li>
</ol>
<p>Professional scientists will distinguish between the two partly based on the publishing house, and even the name of the journal. </p>
<p>The Public Library of Science (<a href="https://www.plos.org/">PLOS</a>) is a reputable publisher, and offers <a href="http://journals.plos.org/plosone/">PLOS ONE</a> for solid science – even if it may only appeal to a limited audience. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/universities-spend-millions-on-accessing-results-of-publicly-funded-research-88392">Universities spend millions on accessing results of publicly funded research</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.springernature.com/gp/">Springer Nature</a> has launched a similar journal called <a href="https://www.nature.com/srep/">Scientific Reports</a>. Other good quality journals with lower impact factors include journals of specialist academic societies in countries with smaller populations – they will never reach a large audience but the work may be rock solid. </p>
<p>Predatory journals on the other hand are often broad in scope, published by online publishers managing many titles, and sometimes have the word “international” in the title. They are seeking to harvest large numbers of papers to maximise profits. So names like “The International Journal of Science” should be treated with caution, whereas the “Journal of the Australian Bee Society” may well be reliable (note, I invented these names just to illustrate the point). </p>
<h2>The value of a journal vs a single paper</h2>
<p>Impact factors have become controversial because they have been overused as a proxy for the quality of single papers. Strictly speaking, however, they reflect only the interest a journal attracts, and may depend on a few “jackpot” papers that “go viral” in terms of accumulating citations. </p>
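That “jackpot” effect is easy to see with invented numbers: a couple of viral papers can drag a journal-wide average far above what a typical paper in the journal achieves.

```python
from statistics import mean, median

# Hypothetical citation counts for ten papers in one journal: two "jackpot"
# papers attract almost all the citations, the rest are barely cited.
citations = [250, 120, 4, 3, 2, 2, 1, 1, 0, 0]

print(mean(citations))    # 38.3 -- the impact-factor-style average looks high
print(median(citations))  # 2.0  -- but the typical paper is cited only twice
```

This skew is one reason a journal-level average says little about the quality of any single paper in it.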
<p>Additionally, while papers in higher impact journals may have undergone more scrutiny, there is more pressure on the editors and on the authors of these top journals. This means shortcuts may be taken more often, the last, crucial control experiment may never be done, and the journals end up being less reliable than their reputations imply. This disconnect sometimes generates sniping about how certain journals aren’t as good as they claim to be – which actually keeps everyone on their toes.</p>
<p>While all the controversies surrounding impact factors are real, every researcher knows and thinks about them or other journal ranking systems (SNIP – Source Normalised Impact per Paper, SJR – SCImago Journal Rank, and others) when they are choosing which journal to publish in, which papers to read, and which papers to trust. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-isnt-broken-but-we-can-do-better-heres-how-95139">Science isn't broken, but we can do better: here's how</a>
</strong>
</em>
</p>
<hr>
<h2>Nothing is perfect</h2>
<p>Even if everything is done properly, peer review is not infallible. If authors fake their data very cleverly, for example, then it may be difficult to detect. </p>
<p>Deliberately faking data is, however, relatively rare. Not because scientists are saints but because it is foolish to fake data. If the results are important, others will quickly try to reproduce and build upon them. If a fake result is published in a top journal it is almost certain to be discovered. This does happen from time to time, and it is always a <a href="https://www.nature.com/news/stem-cell-scientist-found-guilty-of-misconduct-1.14974">scandal</a>.</p>
<p>Errors and sloppiness are much more common. This may be related to the increasing urgency, pressure to publish and prevalence of large teams where no one may understand all the science. Again, however, only inconsequential mistakes will survive – most important <a href="https://www.nature.com/news/neutrinos-not-faster-than-light-1.10249">errors</a> will quickly be picked up.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<h2>Can you trust the edifice that is modern science?</h2>
<p>Usually, one can get a feel for how likely it is that a piece of peer-reviewed science is solid. That feel comes from the combined pride and reputation of the authors, the journal editors and the peer reviewers. </p>
<p>So I do trust the combination of the peer review system and the fact that science is built on previous foundations. If those foundations are shaky, cracks will appear quickly and things will be set straight.</p>
<p>I am also heartened by <a href="https://theconversation.com/peer-review-has-some-problems-but-the-science-community-is-working-on-it-99596">new opportunities</a> for even better and faster systems arising from advances in information technology. These include models for post-publication (rather than pre-publication) peer review, which may formalise the discussions that would otherwise happen on Twitter and that can raise doubts about the validity of published results.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bored-reading-science-lets-change-how-scientists-write-81688">Bored reading science? Let's change how scientists write</a>
</strong>
</em>
</p>
<hr>
<p>The journal <a href="https://elifesciences.org/">eLife</a> is turning peer review on its head. It offers to publish everything it deems of sufficient interest, then lets authors choose whether or not to answer points raised in peer review after acceptance of the manuscript. Authors can even choose not to go ahead if they think the peer reviewers’ points expose the work as flawed. </p>
<p>eLife also has a system where reviewers confer and provide a single moderated review, to which their names are appended and which is published. This prevents anonymity from enabling overly harsh treatment. </p>
<p>All in all, we should feel confident that important science is solid – thanks to the peer review, transparency, scrutiny and reproduction of results built into science publication – even if peripheral science may remain unvalidated. Nevertheless, in fields where reproduction is rare or impossible, such as long-term studies that depend on complex statistical data, scientific debate is likely to continue. </p>
<p>But even in these fields, the endless scrutiny by other researchers, together with the proudly guarded reputations of authors and journals, means that, though it will never be perfect, the scientific method remains more reliable than any alternative.</p>
<p class="fine-print"><em><span>Merlin Crossley receives funding from the Australian Research Council and the National Health and Medical Research Council. He works at UNSW Sydney, and is on the Trust of the Australian Museum, the Boards of the Australian Science Media Centre, UNSW Press and UNSW Global. </span></em></p>
There’s peer review – and then there’s peer review. With more knowledge you can dive in a little deeper and make a call about how reliable a science paper really is.
Merlin Crossley, Deputy Vice-Chancellor Academic and Professor of Molecular Biology, UNSW Sydney
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/99596
2018-07-12T20:02:19Z
2018-07-12T20:02:19Z
Peer review has some problems – but the science community is working on it
<figure><img src="https://images.theconversation.com/files/227046/original/file-20180711-70042-1167u0r.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Peer review takes time – around seven to eight hours per paper if done properly. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hand-holding-red-pen-over-proofreading-635366888?src=iOMmrcGE3zZ9Pay2ILJfLQ-1-1">from www.shutterstock.com </a></span></figcaption></figure><p>Peer review is the central foundation of science. It’s a process where scientific results are vetted by academic peers, with publication in a reputable journal attesting to the merits of the work and informing readers of the latest scientific discoveries. </p>
<p>But peer review sometimes gets a <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/">bad rap</a> – criticised for a purported lack of transparency, low accountability and even poor <a href="http://science.sciencemag.org/content/342/6154/60.full">scientific rigour</a>. </p>
<p>There’s now considerable movement towards tweaking or even remodelling the peer review system. Key areas of focus include making journal editors more directive in the process, rewarding reviewers, and improving accountability of editors, reviewers and authors. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-isnt-broken-but-we-can-do-better-heres-how-95139">Science isn't broken, but we can do better: here's how</a>
</strong>
</em>
</p>
<hr>
<h2>Peer review relies on volunteers</h2>
<p>The peers in the <a href="https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/the-peer-review-process.html">peer review system</a> are volunteer academics with expertise relevant to the paper being considered. But it’s hard to find suitable volunteers. </p>
<p>Reviewing is more complex and onerous than just rejecting or accepting a manuscript. More often than not, a reviewer suggests additional experiments that authors have overlooked, or challenges the interpretation of some of the data. This initiates a dialogue between author and reviewer aimed at improving the integrity and scientific merit of the paper.</p>
<p>It takes time – at least seven to eight hours per paper if done properly – with no remuneration or recognition for the reviewer, so it is rarely treated as a priority in a busy academic schedule. As a result, scientific rigour can be lost when reviews are fast-tracked. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<p>At the other extreme, novice reviewers (perhaps trying to impress the editor) can sometimes turn small discrepancies into significant flaws. This represents a breakdown in the fairness of the review process. </p>
<p>Overall, these issues create a <a href="https://www.theguardian.com/higher-education-network/blog/2013/oct/23/peer-review-system-science-research">limited number of peer reviewers</a> in practice, an outcome that can lead to cronyism. </p>
<p>Delving further into this remaining inadequate pool of reviewers, a significant gender gap is also apparent. <a href="https://t.co/JBaEOMQzYu">Nature</a> reports that less than 20% of its reviewers in 2017 were women. </p>
<h2>More accountability from editors</h2>
<p>We should demand more of our journal editors. </p>
<p>Editors could be more proactive by rejecting articles that are below publication standard upon submission, rather than leaving a reviewer with the arduous task of being both scientist and copy checker. </p>
<p>To retain and train novice reviewers, clearer evaluation criteria from editors would vastly improve the reliability and quality of submitted papers. </p>
<p>Editors could also engage better in a dynamic dialogue between author and reviewer – digital communication technologies enable real-time global discussions to facilitate streamlined review processes for all involved. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bored-reading-science-lets-change-how-scientists-write-81688">Bored reading science? Let's change how scientists write</a>
</strong>
</em>
</p>
<hr>
<h2>Recognition of reviewers</h2>
<p>Traditionally, editors are held up as a revered part of the peer review process, while reviewers are simply not acknowledged for their contributions. But this is changing. </p>
<p>To promote increased transparency, greater accountability and fairness, open peer review processes list reviewers and editors in addition to authors in each publication. </p>
<p>This is happening now in newly established online journals such as <a href="https://elifesciences.org/">eLife</a>. Independent platform Publons <a href="https://publons.com/benefits/researchers">rewards reviewers</a> by listing all peer reviewing and editorial activity to provide evidence of a reviewer’s expert contributions in their field. Publons also runs a <a href="https://publons.com/community/awards/peer-review-awards-2018">reviewer awards program</a>. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/demasi-cleared-but-images-in-science-continue-to-attract-intense-scrutiny-84772">Demasi cleared, but images in science continue to attract intense scrutiny</a>
</strong>
</em>
</p>
<hr>
<p>Similarly, Elsevier has started a “<a href="https://www.elsevier.com/en-au/reviewers/becoming-a-reviewer-how-and-why#recognizing">reviewer recognition programme</a>”, extending various rewards and publishing a yearly list acknowledging the contributions of all the reviewers.</p>
<p>This process has been met with <a href="https://www.molecularecologist.com/2016/05/the-fourth-reviewer-open-review/">criticism</a> by some who insist anonymity guarantees unbiased opinions.</p>
<h2>Post- vs pre-publication peer review</h2>
<p>It’s now becoming clear that scientific dialogue does not need to stop at the endpoint of publication, and that not all problems within a manuscript may be identified at the time of peer review.</p>
<p><a href="https://blog.f1000.com/2014/07/08/what-is-post-publication-peer-review/">Post-publication peer review</a>, in its most validated form, involves a journal such as <a href="https://www.frontiersin.org/about/review-system?utm_source=Home&utm_medium=Carousel&utm_campaign=review-about">Frontiers</a> asking academics to engage in a published interactive dialogue with authors during the review process, giving a level of accountability and responsibility. </p>
<p>Other journals such as <a href="https://f1000.com/prime/about/whatis">Faculty of 1000 Research</a>, <a href="https://publications.copernicus.org/services/public_peer_review.html">Copernicus</a> and <a href="http://journals.plos.org/plosone/">PLOS ONE</a> publish papers with minimal evaluation. This shifts the focus towards post-publication peer review – authors, reviewers and readers critique and comment on the paper to judge its scientific merit in the public domain. </p>
<p>Alternative post-publication peer review platforms such as <a href="http://about.scienceopen.com/what-is-post-publication-peer-review/">ScienceOpen</a> invite all scientists registered with digital identifier <a href="https://orcid.org/">ORCID</a> to write a review or comment on <a href="http://www.apastyle.org/learn/faqs/what-is-doi.aspx">DOI</a>-linked papers. This facilitates engagement of a large cross-section of the scientific community for dynamic appraisal of a publication’s scientific merit. </p>
<p>Forums such as <a href="https://pubpeer.com/">PubPeer</a> invite anonymous commentary from anyone in the scientific or general community. This occurs without moderation, which opens the door to trolling and abusive behaviour, at times culminating in <a href="https://retractionwatch.com/2018/04/05/caught-our-notice-researcher-who-sued-pubpeer-commenter-up-to-21-retractions/">legal action</a>.</p>
<h2>Time to try something new?</h2>
<p>Peer review is not ready to be retired – but it is primed to change. </p>
<p>A recent <a href="https://elifesciences.org/articles/36545">trial by eLife</a> intends to radically transform the roles of editor, reviewer and author. According to this model, if a senior editor deems a publication worthy of going to review, this paper immediately qualifies for publication. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/weakened-code-risks-australias-reputation-for-research-integrity-98622">Weakened code risks Australia’s reputation for research integrity</a>
</strong>
</em>
</p>
<hr>
<p>Once under review, an open dialogue between author and reviewers takes place. Upon receipt of reviewers’ recommendations, the authors can decide to continue experiments if advised, retract the paper or publish it. This leaves the author’s decision to the scrutiny of the general scientific community. </p>
<p>This innovation may greatly improve the transparency of open peer review, increase accountability on behalf of all participants and reduce burden on the peer review system. It addresses the three major strategies required for improvement of the peer review system. But is it a step too far, too soon? Time will tell. </p>
<p>The overall goal of <a href="https://www.nature.com/nature/peerreview/debate/index.html">debates around peer review</a> and appearance of new publication platforms and approaches is to create a united front of authors, reviewers and editors to uphold scientific integrity. </p>
<p>This is vital not just within academic circles, but also to maintain the reputation of science in the broader community.</p>
<p class="fine-print"><em><span>Jessica Borger does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Key areas of focus for tweaking peer review include making journal editors more directive in the process, rewarding reviewers, and improving accountability of editors, reviewers and authors.
Jessica Borger, Postdoctoral Research Fellow in Immunology, Monash University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/86058
2017-10-27T06:01:34Z
2017-10-27T06:01:34Z
Not just available, but also useful: we must keep pushing to improve open access to research
<figure><img src="https://images.theconversation.com/files/191936/original/file-20171026-28083-18v0f4x.jpg?ixlib=rb-1.1.0&rect=71%2C83%2C4262%2C4297&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">There is a huge appetite for science and other research - so why aren't more academic publications truly 'open access'? </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/hand-opens-room-door-book-library-434135344?src=keBI2ko-ELJY6YgFZq0TGQ-1-98">from www.shutterstock.com </a></span></figcaption></figure><p>Stephen Hawking’s PhD thesis became freely available online this week, and promptly <a href="https://www.theguardian.com/science/2017/oct/23/stephen-hawkings-expanding-universes-thesis-breaks-the-internet">crashed a server</a> following massive public interest. </p>
<p>It’s a clear example of the public appetite for open access scientific information, and of the potential reach when articles are available. </p>
<p>But most of the world’s academic literature is still only legally available behind a paywall. </p>
<p>It’s time we brought the idea of open access publication truly to fruition: not just so more people can read research, but also to improve the application of academic work to address issues such as health inequity and poverty. </p>
<hr>
<p><em><strong>Read more:</strong> <a href="https://theconversation.com/how-the-insights-of-the-large-hadron-collider-are-being-made-open-to-everyone-70283">How the insights of the Large Hadron Collider are being made open to everyone</a></em> </p>
<hr>
<h2>A brief history of open access</h2>
<p>The example of Hawking’s thesis backs up what we know already from the numbers: work that is freely available is at least <a href="http://blogs.nature.com/ofschemesandmemes/2015/10/21/open-access-is-thriving-at-nature-publishing-group">two to three times more likely to be read</a> than closed access articles, and is <a href="http://onlinelibrary.wiley.com/doi/10.1002/asi.23687/full">47% more likely to be cited</a> in Wikipedia. </p>
<p>Defined simply, “open access” publications refer to work that is freely available, licensed in a way that allows broad use and reuse, and permanently archived in a public repository or open publishing platform. </p>
<p>This week marks the tenth anniversary of <a href="http://www.openaccessweek.org/">Open Access Week</a>, and 15 years since the <a href="http://www.budapestopenaccessinitiative.org/">Budapest Open Access initiative</a> (one of the first definitions of open access) was launched. </p>
<p>In the past 15 years open access has morphed somewhat. It started as a fairly niche proposal, with small numbers of open access publishers in operation and <em>ad hoc</em> networks of repositories. Now it’s a truly global movement. Thousands of open access articles are freely available either through publishing in <a href="https://doaj.org/">open access journals</a>, or via the many open <a href="https://www.coar-repositories.org/">institutional repositories</a> globally. </p>
<p>There were <a href="http://www.caul.edu.au/caul-programs/caul-statistics/statistics-summary-current">32.4 million accesses</a> of content from Australian academic research repositories in 2014. But compared with the pace of change in the online newspaper or music industry, the adoption of open access to academic research is slow. Why? A few factors come into play. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/your-questions-answered-on-open-access-49284">Your Questions Answered on open access</a>
</strong>
</em>
</p>
<hr>
<h2>Commercial agendas</h2>
<p>One fundamental problem is that universities pay vast sums of money to support academic publishing through subscriptions. Most of this goes into the bank accounts of a <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502">small number of commercial publishers</a> who have a powerful interest in not supporting wholesale change in business models.</p>
<p>This handful of publishers has been systematically <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502">buying up</a> smaller publishers and journals, creating an oligopoly. They have now moved into ways of collecting revenues for open access via article processing charges (<a href="http://blogs.egu.eu/network/palaeoblog/files/2015/02/OpenGlossary1.pdf">APCs</a>) – payment for publication once a research paper has passed peer review. </p>
<p>APCs are levied by many, but not all, open access publishers (including not-for-profit ones). However, the highest APCs are seen at commercial publishers especially in their journals that are not fully open access – so-called <a href="https://wellcome.ac.uk/funding/managing-grant/wellcome-and-coaf-open-access-spend-2015-16">hybrid journals</a>, where costs can be up to US$5,000 per article.</p>
<h2>Different definitions</h2>
<p>Different <a href="https://peerj.com/preprints/3119/">descriptors of open access</a> can be confusing. </p>
<p>Research made open in a journal is referred to as “gold”, and in an institutional repository it is “green”. Work made open illegally has been called “black”. But open access is also often used as a synonym for just free access of a static version of a paper PDF, with no right to reuse. </p>
<p>True open access takes the form of a fully digitally interoperable article, electronically marked with rich metadata that indicates who wrote it, and with a licence that allows use and reuse. Such papers can be used in teaching, included seamlessly in other academic work, and much more – all with clear attribution and credit to the original author.</p>
<p>These ideas have been consolidated into the “F.A.I.R.” principles to describe research that is findable, accessible, interoperable and reusable. A <a href="https://www.fair-access.net.au/">statement</a> laying out these principles for Australian research was developed last year.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bored-reading-science-lets-change-how-scientists-write-81688">Bored reading science? Let's change how scientists write</a>
</strong>
</em>
</p>
<hr>
<h2>Leadership matters</h2>
<p>The importance of strong leadership in pushing the case for open access is evident in the Netherlands. Dutch science minister Sander Dekker <a href="https://www.government.nl/documents/publications/2016/05/26/opiniestuk-van-staatssecretaris-dekker-over-open-access">has taken on open access as a cause</a>, which has resulted in a <a href="https://www.openscience.nl/en/open-science">national plan for open science</a>.</p>
<p>In Australia, open access policies are predominantly repository-based at the two big funders: the <a href="http://www.arc.gov.au/">Australian Research Council</a> and the <a href="https://www.nhmrc.gov.au/">National Health and Medical Research Council</a>. </p>
<p>An overall national position would be immensely valuable. Positive first steps were made towards this when the Productivity Commission recommended national and state open access policies in its 2016 <a href="https://www.pc.gov.au/inquiries/completed/intellectual-property">Inquiry into Intellectual Property Arrangements</a>. In August 2017 the federal government <a href="https://www.industry.gov.au/innovation/Intellectual-Property/Documents/Government-Response-to-PC-Inquiry-into-IP.pdf">accepted this recommendation</a>.</p>
<h2>A vision and a pathway</h2>
<p>Even if all the above issues were dealt with, open access would continue to advance piecemeal unless we have a long-term clarification of what we’re aiming for and how to get there. </p>
<p>It’s important to note that increasing open access is <a href="https://openinorder.to/">not the end goal in itself</a>. We need open access in order to fulfil other urgent priorities such as maximising collaboration, improving global health, and reducing poverty – the theme of this year’s Open Access Week.</p>
<p>It’s worth noting here that caution may be needed around open access to sensitive data, for example relating to patients or <a href="https://theconversation.com/publish-and-dont-perish-how-to-keep-rare-species-data-away-from-poachers-80239">threatened species</a>, or some commercial work. </p>
<p>An effective open access scholarly ecosystem requires a collaborative, long-term commitment to policies and infrastructure by key players. Examples of how this can take place were detailed this week by a <a href="https://about.hindawi.com/opinion/a-radically-open-approach-to-developing-infrastructure-for-open-science/">publisher</a> and COAR, the global <a href="https://www.coar-repositories.org/news-media/beyond-open-access-five-prerequisites-for-a-sustainable-knowledge-commons/">repository association</a>. </p>
<p>In 2017 it’s high time to look beyond narrow definitions of open access. Let’s focus on infrastructure planning and building for the next decade, where research outputs are available not just for reading, but also for effective application.</p>
<p class="fine-print"><em><span>Virginia Barbour is the Director of the Australasian Open Access Strategy Group, which advocates for open access. </span></em></p>
Could the real open access please stand up? If more research was published according to true open access principles, we’d see better application of evidence for everyone’s benefit.
Virginia Barbour, Director, Australasian Open Access Strategy Group, Queensland University of Technology
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/85186
2017-10-09T17:25:22Z
2017-10-09T17:25:22Z
Ancient DNA unearths fascinating secrets. But what about the ethics?
<figure><img src="https://images.theconversation.com/files/189296/original/file-20171008-25749-1f3e745.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Research of ancient DNA has tended to ignore previous studies about the bones themselves.</span> <span class="attribution"><span class="source">Reuters/Siphiwe Sibeko</span></span></figcaption></figure><p>Ancient DNA is starting to reveal the secrets of how people emerged from, moved into or moved around Africa. Human skeletons from <a href="https://academic.oup.com/gbe/article-lookup/doi/10.1093/gbe/evu202">Saldanha Bay</a> in South Africa’s Western Cape province, <a href="http://science.sciencemag.org/content/early/2017/09/27/science.aao6266.long">Ballito Bay</a> in its KwaZulu-Natal province and Mota Cave in <a href="http://science.sciencemag.org/content/350/6262/820">Ethiopia</a>, as well as remains from Tanzania and now <a href="http://www.sciencedirect.com/science/article/pii/S0092867417310085?via%3Dihub">Malawi</a>, have been analysed and the results recently published.</p>
<p>I am part of a team working on a whole series of skeletons from the Later Stone Age site of Faraoskop in the Western Cape. We are trying to recover both <a href="https://ghr.nlm.nih.gov/primer/basics/mtdna">mitochondrial</a> and nuclear DNA to work out relationships between individuals in what may have been a case of mass killing some 2,000 years ago.</p>
<p>This rush of projects has presented the curators of archaeological skeletons with ethical issues because the research requires the destruction of human bone. </p>
<p>There are four central problems that concern me and that have been echoed in my private correspondence with various colleagues: competition between labs for samples; the danger of parachute research (foreign researchers who drop in, gather data and go home again); the disconnection between the study of bones and genetics; and laboratory methodologies and comparative data.</p>
<h2>The challenges</h2>
<p><strong>Competition for samples:</strong> This has become a very real problem. At least five labs have been processing archaeological skeletons from South Africa. Back in May 2014, I made a list of all ancient DNA projects on South African specimens that had, up to that point, been proposed or were in action. I counted 13, though not all of these have taken place. </p>
<p>In some cases permission to sample has been refused. One reason for refusal is that the project is simply an attempt to analyse skeletons because they are old and available. This sort of analysis may be good for the laboratory concerned, but it is just plain bad science and is perilously close to “mining” of bone specimens from museums. </p>
<p>Much of the competition for samples is publication driven with labs chasing the next paper in Nature or other high-impact journals. This is obviously important as it can drive funding for labs or promotions for their denizens. </p>
<p><strong>Parachute research:</strong> It’s very easy to do sampling in this kind of research. All that’s required is a nubbin of bone, and in most cases that is sent out of the country for analysis to happen elsewhere. So how should South African researchers fit in? </p>
<p>For a number of years there was an active resistance to setting up a South African lab in the belief that it was too expensive and funding would be better spent on projects that have a more direct benefit to the country’s previously disadvantaged people. That attitude is now changing in some quarters and I have heard talk of setting up labs in Cape Town, Johannesburg and Pretoria. How would such labs link to overseas institutions? </p>
<p><strong>Genes versus bones:</strong> There has been a definite tendency for genetic research to ignore information gathered in <a href="https://doi.org/10.1080/00359190509520487">previous studies</a> of the bones of the skeletons themselves.</p>
<p>Ancient DNA research, like genetic research on living peoples, has been focused on tracing back lineage lines through mitochondrial, Y-chromosome or, very recently, nuclear DNA. All that has been required is a tiny fragment of bone that can yield DNA. But can such studies give us a true picture of the past? </p>
<p>The answer is “yes” in terms of lineage, but “no” in terms of life experience and adaptation. This issue is important because the first choice in sampling should be from as complete a skeleton as possible so that genetic and osteological data – that is, information about bones – can be compared. </p>
<p>Perhaps the most extreme example of this problem is the construction of the human ancestor known as the “<a href="https://genographic.nationalgeographic.com/denisovan/">Denisovans</a>”. Much has been written about these distant ancestors’ genetics. But all of it has been based on one finger bone and three teeth from one site. We actually know nothing about these people except for their genetic shadow. The forensic anthropologist in me screams that I must have a body before making any conclusions. The same goes for the discussion of people from the comparatively recent African past. </p>
<p><strong>Comparative data:</strong> As a non-geneticist, it did not cross my mind that different labs might produce different DNA results. Some years ago I had my own Y-chromosome and mitochondrial DNA analysed. The results were fascinating, but I was extremely surprised to discover that if I sent the same samples to different DNA heritage laboratories I could get different results. </p>
<p>It is not the analysis itself that is different, but the reference samples that are chosen for comparison. This can be resolved as the analysed samples become more numerous – as long as the different labs share their results – but I have recently discovered that not all labs are the same when it comes to piecing together long strands of nuclear DNA from the fragments discovered in the process of extracting ancient DNA from bones. </p>
<p>The processing methods are not interchangeable and there are at least two different methodologies that produce different success rates. This means it is possible that results from different labs may not be comparable. This would make the competition between labs even more intense and might even result in multiple requests for samples from the same skeleton. </p>
<h2>Knowledge about African heritage</h2>
<p>Ultimately, the most important issue is that African scientists need to be part of the research and African descendant communities need to be able to access the information discovered about their ancestors. </p>
<p>We need to ensure that both training and jobs in ancient DNA research are available in African countries and that publications are submitted to local scientific and museum journals. This research is not about the next promotion for the lab scientists. It is about building the knowledge base of our African heritage. </p>
<p><em>This article has been adapted from <a href="http://www.sajs.co.za/system/tdf/publications/pdf/SAJS-113-9-10_Morris_NewsViews.pdf?file=1&type=node&id=35847&force=">a piece</a> which originally appeared in the South African Journal of Science.</em></p>
<p class="fine-print"><em><span>Alan G Morris receives funding from South African NRF.</span></em></p>
A rush of ancient DNA projects in Africa has presented the curators of archaeological skeletons with ethical issues because research requires the destruction of human bone.
Alan G Morris, Professor of Biological Anthropology, University of Cape Town
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/85161
2017-10-04T00:42:09Z
2017-10-04T00:42:09Z
How fair is it for just three people to receive the Nobel Prize in physics?
<figure><img src="https://images.theconversation.com/files/188682/original/file-20171003-18916-171bnxd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Alfred Nobel didn't foresee the current era of mega scientific collaboration.</span> <span class="attribution"><a class="source" href="https://www.nobelprize.org/press/#/image-details/584fbf368409c20d00efa01f/552bd85dccc8e20c00e7f979?sh=false">© Nobel Media AB Pi Frisk</a></span></figcaption></figure><p>The Nobel Foundation statutes decree that “<a href="https://www.nobelprize.org/nobel_prizes/facts/">in no case</a>” can a Nobel Prize be divided between more than three people. So it may not raise many eyebrows that the 2017 award in physics went to <a href="https://www.nobelprize.org/nobel_prizes/physics/laureates/2017/press.html">just three scientists on the LIGO team</a> for their “decisive contributions to the LIGO detector and the observation of gravitational waves.”</p>
<p>But <a href="https://doi.org/10.1038/497557a">science is increasingly collaborative</a> across teams (including scientists and engineers), across nations and across disciplines. The majority of all scientific articles <a href="https://doi.org/10.1126/science.1136099">are co-authored</a>. Of these, over 25 percent are <a href="https://doi.org/10.1371/journal.pone.0131816">internationally co-authored</a>. LIGO – more than most projects – represents these trends. One of the group’s most important papers involves <a href="https://doi.org/10.1103/PhysRevLett.116.061102">355 co-authors from at least 20 countries</a>.</p>
<p>So with cutting-edge science being carried out in large international collaborations, who actually winds up on the rostrum in Stockholm? As a student of science dynamics, I have tracked how and why scientists link up with one another, in what fields, and how it improves the outcomes. These allegiances have an impact on who receives an award like a Nobel Prize, since international collaborations are more highly cited than national or sole-authored work. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188685/original/file-20171003-739-ejs8rt.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A LIGO optics technician who is not a recipient of the Nobel Prize.</span>
<span class="attribution"><a class="source" href="https://www.ligo.caltech.edu/image/ligo20151214">Matt Heintze/Caltech/MIT/LIGO Lab</a>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<h2>Shifting norms around collaboration and credit</h2>
<p>Scientific discoveries these days typically rely on advances in the underlying technology and equipment used in experimentation. To enable breakthroughs, LIGO, CERN, the Human Genome Project and others rely on new technologies, which in turn are often built by large international teams. And within science, it’s becoming standard to recognize contributions like these more broadly than in the past. </p>
<p>This is a shift in social behavior, since scientists have always had collaborators and helpers – they just didn’t grant them a place on the “author” list. Now, there is a greater tendency to list the technical people who make discoveries possible. At CERN, for example, new discoveries, <a href="https://doi.org/10.1103/PhysRevLett.114.191803">such as the Higgs boson</a>, are claimed in articles that list engineers and computer scientists as well as the theorists who develop the experiments.</p>
<p>And the fact that the Nobel Prize is offered specifically for physics is out of step with the tendency for interdisciplinary contributions to be fundamental to breakthroughs. A quick glance at the list of <a href="https://doi.org/10.1103/PhysRevD.93.042006">contributing institutions for LIGO</a> shows collaborators from a school of mathematics, space science, departments of informatics, as well as cosmologists, astrophysics observatories, supercomputing centers and many others.</p>
<p>While practitioners have expanded the way contributions are credited, awards like the Nobel Prizes haven’t caught up. The little bit of science history taught in school still focuses on individual contributors such as Marie Curie and Albert Einstein. Harder to explain or visualize are the cross-disciplinary collaborations that constitute most of science today.</p>
<h2>The rich get richer</h2>
<p>In a <a href="https://doi.org/10.1371/journal.pone.0134164">study I conducted with the Nobel Library in Sweden</a>, we compared Nobel Prize winners in physiology or medicine to a matched group of scientists to examine productivity, impact, coauthorship and international collaboration patterns. The laureates’ co-author networks reveal significant differences from those of the non-laureates. Laureates are more likely to build bridges across a network by reaching out to a non-obvious collaborator, such as <a href="https://www.nobelprize.org/nobel_prizes/physics/laureates/2000/">physicist Jack Kilby</a> working with a materials scientist to develop new materials for microprocessors. They are also more likely to exploit “structural holes” – gaps between fields that offer enticing but unrealized possibilities. </p>
<p>This process builds their reputation within as well as across scientific fields. (For example, both physicists and materials scientists read Kilby’s paper.) In science, reputation is the coin of the realm. It’s gained through cooperation as well as attention to the outputs of science – <a href="http://www.jstor.org/stable/2091085">the journal article</a>.</p>
<p>When publishing any scientific article, there is a basic conundrum – someone must receive the prime place on the list of authors. In some fields, authors covet the first place; in others, the last place. And the benefits of being the primary author go far beyond a single article. There’s a phenomenon called the <a href="https://doi.org/10.1126/science.159.3810.56">“Matthew Effect” in science</a>, referring to the observation in the Gospel of Matthew that the “rich get richer.” The author who gets top billing on an article is much more likely to receive attention in the future.</p>
<p>Creative networkers like Jack Kilby grow their network in several fields as a result of their work, enhancing citations and reputation.</p>
<p>Searchable databases such as Google Scholar accentuate the Matthew effect, since a search will prioritize the articles with lots of citations. It has long been noted that <a href="http://www.enid-europe.org/conference/abstract%20pdf/Klavans_Boyack_superstars.pdf">only a few “superstars” in science emerge over time</a> – but current practices have supercharged the process because of the <a href="https://doi.org/10.1073/pnas.98.2.404">agglomerating effects of being listed as the primary author</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=399&fit=crop&dpr=1 600w, https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=399&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=399&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/188684/original/file-20171003-18916-1lcaai2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Nobel stage in Stockholm doesn’t have space for everyone.</span>
<span class="attribution"><a class="source" href="https://www.nobelprize.org/press/#/image-details/585104ccffb1110d00062b3e/552bd85dccc8e20c00e7f979?sh=false">© Nobel Media AB Pi Frisk.</a></span>
</figcaption>
</figure>
<h2>Who stays behind</h2>
<p>The Matthew Effect is likely part of the reason that three white men came out “on top” in the case of the 2017 Nobel Prize in physics. The downside of needing a primary author on a collaborative paper is that other collaborators, such as the notable women who also worked on LIGO, sit in the shadows. <a href="https://doi.org/10.1002/asi.1097">Women’s names are much more likely</a> to be listed second, third or farther down the list of authors on scientific papers. It can be difficult for <a href="https://doi.org/10.1371/journal.pbio.2001003">women to claim the top spot</a>.</p>
<p>No doubt when the current Nobel Prize winners in physics accept their award, they will point to “others” who have been instrumental in helping. Yet the essentially collaborative nature of the work – many paying nations, many collaborating disciplines, a multitude of people – raises the question: Can the award fairly be claimed by three (white, American, male) people? The Nobel Prize, developed to recognize 19th-century creativity, may no longer reflect the true contributions within 21st-century science.</p>
<p class="fine-print"><em><span>Caroline Wagner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Today’s scientific research is characterized by interdisciplinary, international collaboration. Awards like the Nobel Prizes haven’t caught up.
Caroline Wagner, Milton & Roslyn Wolf Chair in International Affairs, The Ohio State University
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/84223
2017-09-20T14:17:37Z
2017-09-20T14:17:37Z
The peer review system has flaws. But it’s still a barrier to bad science
<figure><img src="https://images.theconversation.com/files/186593/original/file-20170919-22701-1l6j0rq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Research must be carefully scrutinised by peer reviewers to ensure its veracity.</span> <span class="attribution"><span class="source">Nattapat Jitrungruengnij/Shutterstock</span></span></figcaption></figure><p>Democracy and scientific peer review have something in common: it’s a “system full of problems but the least worst we have”. That’s the view of <a href="http://blogs.bmj.com/bmj/category/columnists/richard-smith/">Richard Smith</a>, a medical doctor and former editor of the illustrious <a href="http://www.bmj.com/">British Medical Journal</a>. </p>
<p>Wiley, a large academic publishing house, <a href="https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/index.html">says that</a>:</p>
<blockquote>
<p>Peer review is designed to assess the validity, quality and often the originality of articles for publication. Its ultimate purpose is to maintain the integrity of science by filtering out invalid or poor quality articles.</p>
</blockquote>
<p>Another publishing house, Springer, <a href="https://www.springer.com/gp/authors-editors/editors/peer-review/32888">describes</a> peer reviewers as being “almost like intellectual gatekeepers to the journal as they provide an objective assessment of a paper and determine if it is useful enough to be published”. </p>
<p>The peer review system has received a fair amount of <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1420798/">negative press</a> in recent years. It has been criticised largely because it is not particularly transparent and depends on a small number of peer reviews, an approach that can lend itself to cronyism. In addition it depends on trust: trust that reviewers will be fair and are willing to put sufficient time into a critical review. In this era of overworked academics being asked to do ever more, “sufficient time” is in short supply.</p>
<p>Despite these concerns, I agree with Smith: peer review is the “least worst” system available for assessing academic research and maintaining science’s integrity. Having worked in academia for the past 30 years and currently serving as Vice President of the <a href="https://www.assaf.org.za/">Academy of Science of South Africa</a>, I believe peer review and the publication process are perhaps more important than ever in this era of “fake news” – and not just for scientists and academics. Thorough review and robust pre- and post-publication engagement by a scientist’s peers are crucial if the average person in the street is to navigate a world full of pseudo-science.</p>
<h2>Scientific truth is built on replication</h2>
<p>One classic case of scientific fraud was the “<a href="http://www2.clarku.edu/%7EPiltdown/map_report_finds/pilt_man_discover.html">Piltdown man</a>” in 1912. Bone fragments supposed to be from an archaeological site in England were presented as a human ancestor. The alleged discovery of an early hominid in England was comfortable for British and European scientists at the time as it suggested that humans evolved in Europe. But this report was the source of controversy for many years. </p>
<p>While the Piltdown man has been recognised as a hoax since 1953, <a href="http://rsos.royalsocietypublishing.org/content/3/8/160328">DNA evidence</a> that the bones came from an orangutan and probably two human specimens was only recently published. </p>
<p>This case illustrates both the strengths and weaknesses of the scientific publishing system. The hoax was possibly published because it fitted the theories of the time. The report was, however, hugely controversial; it was re-examined and, over time, shown by scientists to be fraudulent. </p>
<p>This is a good starting point for understanding how real science works; how research is peer reviewed and critically examined before what is reported can be considered scientific fact.</p>
<p>Perfect science is never based on a single publication. Each publication is essentially a hypothesis: it will be read by other researchers, who will try to repeat or adapt what was done and then publish their own findings.</p>
<p>The peer review system is more complex than a reviewer just rejecting or accepting a manuscript. Quite often a reviewer suggests other experiments that authors have overlooked or different interpretations for some of the data. This means reviewers add significantly to improving the research and analysis that is performed. </p>
<p>There is no question that the reviews that I receive from higher impact factor journals are, on average, more critical and more useful. The impact factor is <a href="https://www.une.edu.au/library/support/eskills-plus/mastering-the-academic-literature/journal-quality">calculated</a> “by dividing the number of current citations to articles published [in the journal] in the two previous years by the total number of articles published in the two previous years”.</p>
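The calculation quoted above is simple enough to sketch in a few lines of Python. The function name and the citation and article counts below are invented for illustration, not real journal figures:

```python
def impact_factor(citations, articles):
    """Two-year impact factor: citations received this year to articles
    published in the previous two years, divided by the number of
    articles published in those two years."""
    return citations / articles

# Hypothetical journal: 200 articles published over the previous two
# years, which drew 900 citations this year.
print(impact_factor(900, 200))  # 4.5
```

Note that the formula weights nothing by individual paper, so a handful of heavily cited articles can lift the number attached to everything the journal publishes.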
<p>In fact in some cases a strong review will send me and my collaborators back to the laboratory and in so doing significantly strengthen our research. The amazing thing about this is that no fee is asked for these reviews. Yet scientists across the world do them willingly.</p>
<p>So scientific truth is based on a body of research which has been tried and tested by many researchers over time. You might ask, then, what value peer review offers – since, over time, an article that was found suitable for publication and further debate by peer reviewers may be debunked.</p>
<h2>Why do we need peer review?</h2>
<p>Peer review provides a filtering system. Studies that are not well conceived or performed will <a href="https://www.editage.com/insights/most-common-reasons-for-journal-rejection">not be published</a>. They will be filtered out either by a journal’s editor or the reviewers. This means that what appears in the scientific literature is more likely to be of a higher quality. Readers of the peer reviewed literature know that it has been subjected to some level of critique. It is not merely the authors’ opinion that what’s being proposed in a particular article is the truth.</p>
<p>Editors and reviewers of peer review journals demand a particular style and level of experimental rigour. Results are substantiated with graphs, diagrams and in some cases photographs. Experiments are always repeated at least once and sometimes more often. Data is subjected to analysis and in some cases statistical methods are used to prove significance. </p>
<p>But how can the quality of a journal be measured in the first place?</p>
<p>A quick Google search throws up many hundreds of scientific journals. Many of these are likely to be <a href="http://beallslist.weebly.com">predatory</a>, charging authors publication fees without providing the sorts of publishing and editing services offered by legitimate journals.</p>
<p>An ordinary reader should find out which association, society or organisation publishes the journal. Alternatively, take a look at the editorial board.</p>
<p>Respected scientists do not link their names to journals they do not respect. Any respected scientist in a discipline knows which are the “good” journals – a decision they make by looking at the quality of the science in such publications.</p>
<p>Next time you read some interesting report or scientific news it’s worth using the internet to check to see if the report is in fact supported by peer reviewed literature that meets these standards. At the very least do this before you share it on Facebook and add to the pseudo-science that already exists.</p>
<h2>The best system for now</h2>
<p>Until such time as there is a better system, peer review and the subsequent publication process with experimental repetition is the only source of substantiated evidence available. As with democracy, we all need to understand its strengths and weaknesses.</p>
<p class="fine-print"><em><span>Brenda Wingfield receives funding from National Research Foundation, Tree Protection Co-operative Programme and is vice president of ASSAf.</span></em></p>
Scientific truth is based on a body of research which has been tried and tested by many researchers over time. Peer review filters the good science from the bad.
Brenda Wingfield, Vice President of the Academy of Science of South Africa and DST-NRF SARChI chair in Fungal Genomics, Professor of Genetics, University of Pretoria
Licensed as Creative Commons – attribution, no derivatives.
tag:theconversation.com,2011:article/62936
2016-08-09T00:22:47Z
2016-08-09T00:22:47Z
Here’s how competition makes peer review more unfair
<figure><img src="https://images.theconversation.com/files/133404/original/image-20160808-18053-n0yxqr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What are the implications of peer review on competition in science?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/wheatfields/1823836311/in/photolist-3MaCZD-pFtAKK-dbWvqC-56Jkta-ohrv6Y-cNpiqN-7gsUPm-56JkqV-4z9Kkf-2vVkpu-6ss3X5-a6YxaA-ph2K7C-p1DLYB-6Mw73S-o5zkQo-bbBnXr-7Pfj3S-bixL5T-4CX1b8-FLLyVh-dABED9-odLsUp-JDPJKo-6EAhPB-8H64QP-7tvVwq-aFzQmz-7cCUeW-6LFb8z-c8m5oY-8xhfNt-8sc1Xu-bWDHha-DSrJd-hJuMTm-bixKZv-s2a4p1-iYVLzN-nHKB6W-3iVZdR-ba4FpF-nTZgkJ-ekEQgP-2LRHmX-sfZ95s-hxCFnY-qdAhKb-c8m59d-fif4Fn">Christian Guthier</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>A scientist can spend several months, in many cases even years, strenuously investigating a single research question, with the ultimate goal of making a contribution – little or big – to the progress of human knowledge. </p>
<p>Succeeding in this hard task requires specialized, years-long training, intuition, creativity, in-depth knowledge of current and past theories and, most of all – lots of <a href="http://www.sciencemag.org/careers/2012/09/so-you-think-you-have-skills">perseverance</a>.</p>
<p>As a member of the scientific community, I can say that, sometimes, finding an interesting and novel result is just as hard as convincing your colleagues that your work actually is novel and interesting. That is, the work would deserve publication in a scientific journal.</p>
<p>But, prior to publication, any investigation must pass the <a href="https://theconversation.com/the-logic-of-journal-embargoes-why-we-have-to-wait-for-scientific-news-53677">screening</a> of the “peer review.” This is a critical part of the process – only after peer review can a work be considered part of the scientific literature. And only peer-reviewed work will be counted during hiring and evaluation, as a valuable unit of work.</p>
<p>What are the implications of the current publication system – based on peer review – on the progress of science at a time when <a href="http://iai.asm.org/content/83/4/1229.long">competition among scientists is rising?</a> </p>
<h2>The impact factor and metrics of success</h2>
<p>Unlike in math, not every publication counts the same in science. In fact, at least initially, to the eye of a hiring committee the weight of a publication is primarily given by the “impact factor” of the journal in which it <a href="http://www.sciencemag.org/careers/2013/09/beyond-cvs-and-impact-factors-employers-manifesto">appears</a>.</p>
<p>The impact factor is a metric of success that counts the average past “citations” of articles published by a journal in previous years. That is, how many times an article is referenced by other published articles in any other scientific journal. This index is a proxy for the prestige of a journal, and an indicator of the expected future citations of a prospective article in that journal. </p>
<p>For example, according to Google Scholar Metrics 2016, the journal with the <a href="https://googlescholar.blogspot.com/2016/07/2016-scholar-metrics-released_14.html">highest impact factor is Nature</a>. For a young scientist, publishing in journals like Nature can represent a career turning point, a shift from spending an indefinite number of extra years in a more or less precarious academic position to getting university tenure. </p>
<p>Given these stakes, publishing in top journals is extremely difficult, and rejection rates range from 80 percent to 98 percent. Such high rates mean that sound research can also fail to make it into top journals. Often, valuable studies rejected by top journals end up in <a href="http://www.sciencemag.org/careers/2008/08/if-first-you-dont-succeed-cool-revise-and-submit-again">lower-tier journals</a>.</p>
<h2>Big discoveries also got rejected</h2>
<p>We do not have an estimate of how many potentially groundbreaking discoveries we have missed, but we do have records of a <a href="http://link.springer.com/article/10.1140/epjst/e2011-01403-6">few exemplary wrong rejections</a>. </p>
<p>For example, economist <a href="https://www.econ.berkeley.edu/faculty/803">George A. Akerlof’s</a> seminal paper, <a href="https://www.iei.liu.se/nek/730g83/artiklar/1.328833/AkerlofMarketforLemons.pdf">“The Market for Lemons,”</a> which introduced the concept of “asymmetric information” (how decisions are influenced by one party having more information), was rejected several times before it could be published. Akerlof was later awarded the Nobel Prize for this and other <a href="https://www.aeaweb.org/articles?id=10.1257/jep.8.1.165">later work</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=386&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=386&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=386&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=485&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=485&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=485&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Competition can increase innovation. Does it improve fairness in peer review?</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-382077415/stock-photo-group-of-young-scientists-studying-new-substances-in-flasks.html?src=UxwuJu-YbpYoaLw1WVYNvA-1-1">Scientists image via www.shutterstock.com</a></span>
</figcaption>
</figure>
<p>That’s not all. Only last year, it was shown that three of the top medical journals <a href="http://www.pnas.org/content/112/2/360.full">rejected 14 out of 14 of the top-cited articles</a> of all time in their discipline.</p>
<p>The question is, how could this happen?</p>
<h2>Problems with peer review</h2>
<p>It might seem surprising to those outside the academic world, but until now there has been little empirical investigation into the institution that approves and rejects all scientific claims.</p>
<p>Some scholars even complain that peer review itself has <a href="http://www.sciencedirect.com/science/article/pii/S0165614700016187">not been scientifically validated</a>. The main reason behind the lack of empirical studies on peer review is the difficulty in accessing data. In fact, peer review data is considered very sensitive, and it is very seldom released for scrutiny, even in an <a href="http://science.sciencemag.org/content/341/6152/1331">anonymous form</a>.</p>
<p>So, what is the problem with peer review? </p>
<p>In the first place, assessing the quality of a scientific work is a hard task, even for trained scientists, and especially for innovative studies. For this reason, reviewers can often be in <a href="http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0003-066X.45.5.591">disagreement</a> about the merits of an article. In such cases, the editor of a high-profile journal usually takes a conservative decision and rejects it. </p>
<p>Furthermore, for a journal editor, finding competent reviewers can be a daunting task. Reviewers are themselves scientists, which means that they tend to be extremely busy with other tasks like teaching, mentoring students and developing their own research. A review for a journal must be done on top of normal academic chores, often meaning that a scientist can dedicate <a href="http://www.nature.com/news/open-access-is-tiring-out-peer-reviewers-1.16403">less time to it than it deserves</a>.</p>
<p>In some cases, journals encourage authors to <a href="https://www.researchgate.net/post/Is_it_a_good_thing_that_journals_ask_you_to_recommend_a_reviewer_for_you_article">suggest</a> reviewers’ names. However, this feature, initially introduced to help the editors, has been unfortunately <a href="http://www.nature.com/news/publishing-the-peer-review-scam-1.16400">misused</a> to create peer review rings, where the suggested reviewers were accomplices of the authors, or even the authors themselves with secret accounts.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">There are many problems with the peer review process.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/thomashawk/207153467/in/photolist-jiHvv-arvAm8-h1wCPX-e5BosA-jNZvxX-4Jp8FS-6hzXLM-6hAE8P-7gvTwm-i4WJsM-6hAAXg-51Gxsn-nHWyWM-6hAN2p-m4LXRM-6hAgx6-fzodPQ-6hAZgH-6hB4mB-6hEW5s-o1q8U8-568Rub-6hAAFZ-6hAbve-o17UTi-6hA4Tt-6hF3Hh-6hB2RV-6hEKtd-6hA7VZ-eJi1gb-6hEXxh-6hEndC-6hAg6P-snRpMc-6hAZdX-6hAJkK-6hEo5d-dbuvmP-6hARjt-6hAk32-6hEtZw-7grX6z-AXoxkm-6hEoaL-o3cu2K-6hzWQM-6hAPeD-7gkWHt-6hzYy8">Thomas Hawk</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Furthermore, reviewers have no <a href="https://www.timeshighereducation.com/news/should-academics-be-paid-for-peer-review">direct incentive to do a good review</a>. They are not paid, and their names do not appear in the published article. </p>
<h2>Competition in science</h2>
<p>Finally, there is another problem, one that has grown worse over the last 15-20 years: academic competition for funding, positions, publication space and credit has intensified along with the <a href="http://www.nature.com/news/2011/110420/full/472276a.html">growth in the number of researchers</a>. </p>
<p>Science is a winner-take-all enterprise, where whoever makes the decisive discovery first gets all the fame and credit, whereas all the remaining researchers are forgotten. The competition can be fierce and the <a href="http://undsci.berkeley.edu/article/dna_01">stakes high</a>. </p>
<p>In such a competitive environment, an erroneous rejection, or simply a delayed publication, can carry huge costs. That is why some Nobel Prize winners no longer hesitate to publish their results in <a href="http://link.springer.com/article/10.1140/epjst/e2011-01403-6">low-impact journals</a>.</p>
<h2>Studying competition and peer review</h2>
<p>My coauthors and I wanted to know the impact such competition could have on peer review. We decided to <a href="http://www.pnas.org/content/early/2016/07/05/1603723113.full">conduct a behavioral experiment</a>. </p>
<p>We invited 144 participants to the laboratory and asked them to play the “Art Exhibition Game,” a simplified version of the scientific publication system, translated into an artistic context. </p>
<p>Instead of writing scientific articles, participants would draw images via a <a href="http://nodegame.org">special computer interface</a>. And instead of choosing a journal for publication, they would choose one of the available exhibitions for display. </p>
<p>The decision about whether an image was good enough for display was then made by “double-blind peer review,” meaning that reviewers were anonymous to the authors and vice versa. This is the same procedure adopted by the majority of academic journals. </p>
<p>Images that received high review scores were to be displayed in the exhibition of choice. They would also generate a monetary reward for the author.</p>
<p>This experiment allowed us to track, for the first time, the behavior of both reviewers and creators at the same time in a creative task. The study produced novel insights into the coevolution of the two roles and how they reacted to increases in the level of competition, which we manipulated experimentally. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">How does peer review work on a creative task? (The image is for illustrative purpose and does not represent the actual experiment.)</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/catalystopensource/23887038194/in/photolist-CoPjf7-pjsGeB-Ekdvx4-EkcBtK-skACs5-dTcgR6-y8Uvc-p7qzMq-aef4qs-aef23y-daz6kD-s1zL9z-81Jimn-ao99C-2cfeNp-rZQMqF-8vSLwq-dtKGpR-e4HMjF-cmoLpJ-fCZUYM-adLKF5-eJ1gPM-y8Udb-63eF7y-wUQ2n-GNfGw-bvL8q9-ACVDWq-ACVGKS-BujNYV-ACV4c4-qR4AHy-GaZRJy-ACV2At-AMAF77-AMB23E-BcrpJo-FoGSYd-Gmi3ss-phbAkK-dJvgNh-cEJnWs-p2rqH-p2rpS-p2rqg-p2rtS-cEJrdJ-p2rpv-p2rsr">Catalyst Open Source</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In one condition, all the images displayed generated a fixed monetary reward. In another condition – the “competitive condition” – the reward for a display would be divided among all the successful authors. </p>
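The two payoff rules can be sketched as toy Python functions. The reward amounts and author names here are illustrative assumptions, not the experiment’s actual stakes; the point is only the structural difference between a fixed reward and a shared prize pool.

```python
def fixed_payoffs(accepted_authors, reward=10.0):
    """Noncompetitive condition: every displayed image earns a fixed reward."""
    return {author: reward for author in accepted_authors}

def competitive_payoffs(accepted_authors, prize_pool=10.0):
    """Competitive condition: one prize pool is split among all successful authors,
    so each additional accepted rival shrinks everyone's share."""
    n = len(accepted_authors)
    return {author: prize_pool / n for author in accepted_authors} if n else {}

print(fixed_payoffs(["A", "B", "C", "D"]))        # each author earns 10.0
print(competitive_payoffs(["A", "B", "C", "D"]))  # each author earns 2.5
```

Under the competitive rule, a reviewer who is also a successful author gains directly from every rival image that gets rejected, which is exactly the conflict of interest the experiment was built to probe.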
<p>This situation was designed to resemble the surge in competition for tenure tracks, funding and attention that <a href="http://science.sciencemag.org/content/344/6186/809.full">science has been experiencing</a> in the last 15-20 years. </p>
<p>We wanted to investigate three fundamental aspects of competition: 1) Does competition promote or reduce innovation? 2) Does competition reduce or improve the fairness of the reviews? 3) Does competition improve or hamper the ability of reviewers to identify valuable contributions?</p>
<h2>Here is what we found</h2>
<p>Our results showed that competition acted as a double-edged sword in peer review. On one hand, it increased the diversity and innovativeness of the images over time. On the other, it sharpened the conflict of interest between reviewers and creators. </p>
<p>Our experiment was set up so that in each round a reviewer would review three images on a scale from 0 to 10 (self-review was not allowed). If the reviewer and the reviewed author had chosen the same exhibition, they were in direct competition. </p>
<p>We found that a substantial number of reviewers, aware of this competition, purposely downgraded the review scores of their competitors to gain a personal advantage. In turn, this behavior lowered the level of agreement between reviewers. </p>
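The strategic-downgrading dynamic can be illustrated with a toy simulation. The scoring rule and the size of the downgrade are assumptions chosen for illustration, not values measured in the experiment; the sketch just shows how biased scores from competing reviewers open a gap with honest ones.

```python
import random

def review(quality, competes_with_author, bias=3):
    """Score an image on the experiment's 0-10 scale: an honest reviewer
    reports the image's quality; a reviewer competing for the same
    exhibition downgrades it by a hypothetical bias."""
    score = quality - (bias if competes_with_author else 0)
    return max(0, min(10, score))  # clamp to the 0-10 scale

# Two reviewers judge the same images; one is in direct competition.
random.seed(42)
gaps = [abs(review(q, False) - review(q, True))
        for q in (random.randint(0, 10) for _ in range(1000))]
print(f"average disagreement between reviewers: {sum(gaps) / len(gaps):.2f}")
```

Even this crude model reproduces the qualitative finding: once some reviewers score strategically, inter-reviewer agreement drops, making review outcomes noisier.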
<p>Finally, we also asked a sample of 620 external evaluators recruited from <a href="https://www.mturk.com">Amazon Mechanical Turk</a> to rate the images independently. </p>
<p>We found that competition did not improve the average level of creativity of the images. Under competition, many more works of good quality were rejected, whereas in the noncompetitive condition more works of lower quality were accepted. </p>
<p>This highlights a similar trade-off at work in the current scientific publication system.</p>
<h2>What we learned</h2>
<p>The experiment confirmed there is a need to reform the current publication system. </p>
<p>One way to achieve this goal could be to evaluate scientists over the long term, which in turn would reduce the conflict of interest between authors and reviewers.</p>
<p>This policy could be implemented by granting <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1756-2171.2011.00140.x/abstract;jsessionid=5B2E8B1BB12AFE06FC287F11EE2D8880.f03t01">long-term funding</a> to scientists, reducing the urge to publish innovative work prematurely and giving them time to strengthen their results before facing peer review. </p>
<p>Another approach would be to remove the requirement that a study demonstrate its “importance,” as some journals, such as <a href="http://journals.plos.org/plosone/s/reviewer-guidelines">PLoS ONE</a>, are already doing. This would give more innovative studies a better chance of passing the screening of peer review.</p>
<p><a href="http://www.peere.org/">Discussing openly</a> the problems of peer review is the first step toward solving them. Having the courage to experiment with alternative solutions is the second.</p>
<p class="fine-print"><em><span>Stefano Balietti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Stefano Balietti, Postdoctoral Research Fellow, Northeastern University
Licensed as Creative Commons – attribution, no derivatives.