Bad science – The Conversation (2021-12-12)

One virus, two countries: how the misuse of science compounded South Africa’s COVID crisis (2021-12-12)
<figure><img src="https://images.theconversation.com/files/435549/original/file-20211203-23-1j1hhlt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A Soweto resident walks past a graffiti art wall educating locals about the dangers of COVID-19 in South Africa.</span> <span class="attribution"><span class="source">Kim Ludbrook/EFE-EPA</span></span></figcaption></figure><p><a href="https://www.bbc.com/news/world-59442129">Now</a>, and in the past, “following the science” on COVID-19 has landed South Africa in trouble. This is not an indictment of science, but of the way it is understood in South Africa.</p>
<p>My new book <a href="https://witspress.co.za/catalogue/one-virus-two-countries">One Virus, Two Countries</a>, which examines South Africa’s response to COVID-19’s arrival in 2020, points out that South Africa fared <a href="http://www.thepresidency.gov.za/newsletters/desk-president%2C-monday%2C-18-may-2020">far worse than the rest of Africa</a> – its case and death numbers were equal to those of the rest of the continent combined. While it is commonly claimed that this is because South Africa tests more, its own scientists have acknowledged that this is not the reason. The book argues that this happened because the minority which takes part in South African public life <a href="https://ewn.co.za/2020/11/03/sa-can-learn-from-europe-s-covid-19-second-wave-say-experts">is fixated on Europe and North America</a>.</p>
<p>Repeated claims that the country was <a href="https://mg.co.za/coronavirus-essentials/2020-06-09-covid-19-free-the-evidence/">“following the science”</a> really meant it was following a particular science <a href="https://ewn.co.za/2020/11/03/sa-can-learn-from-europe-s-covid-19-second-wave-say-experts">followed by some Western scientists</a> – one which ensured high case and death rates because it meant forgoing the protective measures needed to prevent the virus’s spread: not only restrictions when they were needed, but also <a href="https://theconversation.com/south-africas-covid-19-testing-strategy-needs-urgent-fixing-heres-how-to-do-it-138225">testing and tracing</a> the contacts of infected people.</p>
<p>Only weeks after the virus began circulating, <a href="https://www.samrc.ac.za/people/prof-salim-abdool-karim">Salim Abdool Karim</a>, who was then in effect the government’s chief scientific advisor, declared that a <a href="https://www.science.org/content/article/ticking-time-bomb-scientists-worry-about-coronavirus-spread-africa">“severe epidemic” was inevitable</a> because no country had avoided one – he urged authorities to prepare for bereavements. </p>
<p>It soon became clear that he was expressing a consensus among South African scientists, or at least those quoted in the media. </p>
<p>Daily headlines showed it was not true that no country had avoided a severe epidemic. Many had done just that, among them South Korea, which faced rapid early spread of the virus but still kept cases and deaths down. Although successive waves of COVID-19 did drive up numbers in such countries, they have still avoided severe epidemics: South Korea’s population is similar to South Africa’s, but it has <a href="https://graphics.reuters.com/world-coronavirus-tracker-and-maps/countries-and-territories/south-korea/">lost only around 4,000 people</a> to COVID, less than one twenty-fifth of South Africa’s official death toll.</p>
<p>What the scientists said was not <em>“the science”</em> but <em>“a science”</em>, a particular view rejected by many scientists around the world. </p>
<h2>The big divide</h2>
<p>Medical scientists were divided between those who believed every effort should be made to fight the virus and those who argued only for managing it. The group that argued for just managing it – some of whose members influenced Donald Trump and the UK government – <a href="https://www.nytimes.com/2020/10/13/world/white-house-embraces-a-declaration-from-scientists-that-opposes-lockdowns-and-relies-on-herd-immunity.html">was against restrictions</a>. </p>
<p>South Africa’s publicly quoted scientists were in the latter camp, which is why most <a href="https://theconversation.com/south-africa-needs-to-end-the-lockdown-heres-a-blueprint-for-its-replacement-136080">denounced the lockdown</a>. Remarkably, while scientists elsewhere hotly debated this highly contentious view, South Africa’s publicly quoted scientists endorsed it with virtually no dissent. </p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=841&fit=crop&dpr=1 600w, https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=841&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=841&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1057&fit=crop&dpr=1 754w, https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1057&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/435317/original/file-20211202-21915-4cy5gq.JPG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1057&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
</figure>
<p>The government, while differing with some scientists on lockdowns, agreed that much disease and death was inevitable. Zweli Mkhize, the then minister of health, echoing scientists, declared that <a href="https://www.enca.com/news/coronavirus-sa-more-60-percent-people-will-be-infected">60% of the country would be touched by COVID</a>. This was the figure cited by scientists who peddled the <a href="https://www.nature.com/articles/d41586-020-02948-4">now discredited theory</a> that the virus should be allowed <a href="https://www.thelancet.com/article/S2213-2600%2820%2930555-5/fulltext">to spread until it ran out of hosts</a> – a theory greeted with horror elsewhere because it implied that many would have to die, but treated as a consensus view in South Africa’s debate.</p>
<p>The media agreed. During 2020, not one scientist was asked a single critical question, although much of what they said was disputed by scientists elsewhere and some of their claims were clearly wrong. </p>
<p>I argue in the book that the coverage of COVID-19 ranks among the media’s most shameful moments – it treated scientists much as media in totalitarian countries treat government leaders.</p>
<p>Business first supported the lockdown, then lobbied against restrictions. The media helped it – while other countries’ television channels showed pain and death in hospitals and cemeteries, South Africa’s were interested only in the loud pain of travel agents and restaurant owners.</p>
<h2>‘First world’ bias</h2>
<p>The scientists did not advocate surrender because they were not up on the latest debates. They did it because South Africa remains divided between <a href="https://theconversation.com/south-africa-remains-a-nation-of-insiders-and-outsiders-27-years-after-democracy-159561">“insiders” and “outsiders”</a> and they, like the media which fawned over them and the lobbies which began mobilising against health restrictions weeks into the pandemic, see the world through “insider” lenses.</p>
<p>Internationally, the pandemic disturbed the view which divides the planet into first and third worlds, the first an island of competence and health in a sea of third world sickness and savagery. The countries which did least to protect people were in the first, not the third.</p>
<p>The planet’s divide is also South Africa’s. The minority which is heard in the national debate lives and thinks like the first world. It is fixated on Western countries whether it praises or criticises them. And it sees the rest of South Africa as a third world of poverty and incapacity.</p>
<p>When the scientists said no country had avoided a severe epidemic, they meant no country they noticed – no first world country. When they said South Africa was doomed to suffer, they assumed that the third world majority would be too ignorant to protect themselves – and that only first world medicine would work, but that the country did not have enough of it. The rest of the first world sees South Africa in the same way.</p>
<p>It was the biases of its first world which prevented South Africa from mobilising the energies and talents of most of its people (many of whom were far less ignorant of the virus than their first world betters) to reduce cases and deaths to levels elsewhere in Africa. </p>
<p>Those biases may also now ensure that most people are not vaccinated because vaccine arrangements are tailored to the first world.</p>
<p class="fine-print"><em><span>Steven Friedman does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>It was the biases of its ‘first world’ which prevented South Africa from mobilising the energies and talents of most of its people against COVID-19.</em></p>
<p>Steven Friedman, Professor of Political Studies, University of Johannesburg. Licensed as Creative Commons – attribution, no derivatives.</p>

6 tips to help you detect fake science news (2021-03-15)
<figure><img src="https://images.theconversation.com/files/389103/original/file-20210311-20-90hym5.jpg?ixlib=rb-1.1.0&rect=781%2C889%2C4508%2C3098&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If what you're reading seems too good to be true, it just might be.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/dhCGbPx8wpk">Mark Hang Fung So/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>I’m a professor of chemistry, have a Ph.D. and <a href="https://scholar.google.com/citations?user=RpiSPiwAAAAJ&hl=en&oi=ao">conduct my own scientific research</a>, yet when consuming media, even I frequently need to ask myself: “Is this science or is it fiction?”</p>
<p>There are plenty of reasons a science story might not be sound. Quacks and charlatans take advantage of the complexity of science, some content providers can’t tell bad science from good and some politicians peddle fake science to support their positions.</p>
<p>If the science sounds too good to be true or too wacky to be real, or very conveniently supports a contentious cause, then you might want to check its veracity.</p>
<p>Here are six tips to help you detect fake science.</p>
<h2>Tip 1: Seek the peer review seal of approval</h2>
<p>Scientists rely on journal papers to share their scientific results. They let the world see what research has been done, and how.</p>
<p>Once researchers are confident of their results, they write up a manuscript and send it to a journal. Editors forward the submitted manuscripts to at least two external referees who have expertise in the topic. These reviewers can suggest the manuscript be rejected, published as is, or sent back to the scientists for more experiments. That process is called “peer review.”</p>
<p>Research published in <a href="https://undsci.berkeley.edu/article/howscienceworks_16">peer-reviewed journals</a> has undergone rigorous quality control by experts. Each year, about <a href="https://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf">2,800 peer-reviewed journals</a> publish roughly 1.8 million scientific papers. The body of scientific knowledge is constantly evolving and updating, but you can trust that the science these journals describe is sound. Retraction policies help correct the record if mistakes are discovered post-publication.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man in white coat in lab at laptop" src="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Peer-reviewed’ means other scientific experts have checked the study over for any problems before publication.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/scientist-using-computer-in-laboratory-royalty-free-image/1194829395">ljubaphoto/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Peer review takes months. To get the word out faster, scientists sometimes post research papers on what’s called a preprint server. These often have “RXiv” – pronounced “archive” – in their name: MedRXiv, BioRXiv and so on. These articles have not been peer-reviewed and so are <a href="https://doi.org/10.1080/10410236.2020.1864892">not validated by other scientists</a>. Preprints provide an opportunity for other scientists to evaluate and use the research as building blocks in their own work sooner.</p>
<p>How long has this work been on the preprint server? If it’s been months and it hasn’t yet been published in the peer-reviewed literature, be very skeptical. Are the scientists who submitted the preprint from a reputable institution? During the COVID-19 crisis, with researchers scrambling to understand a dangerous new virus and rushing to develop lifesaving treatments, preprint servers have been littered with immature and unproven science. <a href="https://arstechnica.com/science/2020/05/a-lot-of-covid-19-papers-havent-been-peer-reviewed-reader-beware/">Fastidious research standards have been sacrificed for speed</a>.</p>
<p>A last warning: Be on the alert for research published in what are called <a href="https://www.nature.com/articles/d41586-019-03759-y">predatory journals</a>. They don’t peer-review manuscripts, and they charge authors a fee to publish. Papers from any of the <a href="https://guides.library.yale.edu/c.php?g=296124&p=1973764">thousands of known predatory journals</a> should be treated with strong skepticism.</p>
<h2>Tip 2: Look for your own blind spots</h2>
<p>Beware of biases in your own thinking that might predispose you to fall for a particular piece of fake science news.</p>
<p>People give their own memories and experiences more credence than they deserve, making it hard to accept new ideas and theories. Psychologists call this quirk the availability bias. It’s a useful built-in shortcut when you need to make quick decisions and don’t have time to critically analyze lots of data, but it messes with your fact-checking skills.</p>
<p>In the fight for attention, sensational statements beat out unexciting, but more probable, facts. The tendency to overestimate the likelihood of vivid occurrences is called the salience bias. It leads people to mistakenly believe overhyped findings and trust confident politicians in place of cautious scientists.</p>
<p>A confirmation bias can be at work as well. People tend to give credence to news that fits their existing beliefs. This tendency helps climate change denialists and anti-vaccine advocates believe in their causes in spite of the scientific consensus against them.</p>
<p>Purveyors of fake news know the weaknesses of human minds and try to take advantage of these natural biases. <a href="https://www.huffpost.com/entry/how-to-overcome-cognitive-bias-and-use-it-to-your-advantage_b_5900fff3e4b00acb75f1844f">Training can help you</a> <a href="https://hbr.org/2015/05/outsmart-your-own-biases">recognize and overcome</a> your own cognitive biases.</p>
<h2>Tip 3: Correlation is not causation</h2>
<p>Just because you can see a relationship between two things doesn’t necessarily mean that one causes the other.</p>
<p>Even if surveys find that people who live longer drink more red wine, it doesn’t mean a daily glug will extend your life span. It could just be that red-wine drinkers are wealthier and have better health care, for instance. Look out for this error in nutrition news.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="gloved hand holds a mouse" src="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What works well in rodents might not work at all in you.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/face-of-tiny-white-mouse-peeps-out-royalty-free-image/157440932">sidsnapper/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Tip 4: Who were the study’s subjects?</h2>
<p>If a study used human subjects, check to see whether it was placebo-controlled. That means some participants are randomly assigned to get the treatment – like a new vaccine – and others get a fake version that they believe is real, the placebo. That way researchers can tell whether any effect they see is from the drug being tested. </p>
<p>The best trials are also double blind: To remove any bias or preconceived ideas, neither the researchers nor the volunteers know who is getting the active medication or the placebo.</p>
<p>The size of the trial is important too. When more patients are enrolled, researchers can identify safety issues and beneficial effects sooner, and any differences between subgroups are more obvious. Clinical trials can have thousands of subjects, but some scientific studies involving people are much smaller; they should address how they’ve achieved the statistical confidence they claim to have.</p>
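<p>The point about trial size can be sketched with a textbook power calculation (a normal-approximation formula for a two-sample test, not anything from the article): the same modest treatment effect that a small trial will usually miss is detected almost surely by a large one.</p>

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_sample(effect_size: float, n_per_arm: int, z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05
    for a standardized effect size (Cohen's d)."""
    return normal_cdf(effect_size * math.sqrt(n_per_arm / 2) - z_alpha)

# A modest effect (d = 0.3) is usually missed by a tiny trial
# but almost always detected by a large one.
print(f"n=20 per arm:  power = {power_two_sample(0.3, 20):.2f}")   # ≈ 0.16
print(f"n=500 per arm: power = {power_two_sample(0.3, 500):.2f}")  # ≈ 1.00
```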
<p>Check that any health research was actually done on people. Just because a certain drug works <a href="https://twitter.com/justsaysinmice">in rats or mice</a> does not mean it will work for you.</p>
<h2>Tip 5: Science doesn’t need ‘sides’</h2>
<p>Although a political debate requires two opposing sides, a scientific consensus does not. When the media interpret objectivity to mean equal time, it undermines science. </p>
<h2>Tip 6: Clear, honest reporting might not be the goal</h2>
<p>To get their audience’s attention, morning shows and talk shows need something exciting and new; accuracy may be less of a priority. Many science journalists are doing their best to accurately cover new research and discoveries, but plenty of science media are better classified as entertaining rather than educational. <a href="https://www.bmj.com/content/349/bmj.g7346">Dr. Oz</a>, Dr. Phil and Dr. Drew should not be your go-to medical sources. </p>
<p>Beware of medical products and procedures that sound too good to be true. Be skeptical of testimonials. Think about the key players’ motivations and who stands to make a buck.</p>
<p>If you’re still suspicious of something in the media, make sure the news being reported reflects what the research actually found by <a href="https://www.sciencemag.org/careers/2016/03/how-seriously-read-scientific-paper">reading the journal article itself</a>.</p>
<p class="fine-print"><em><span>Marc Zimmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>Whenever you hear about a new bit of science news, these suggestions will help you assess whether it’s more fact or fiction.</em></p>
<p>Marc Zimmer, Professor of Chemistry, Connecticut College. Licensed as Creative Commons – attribution, no derivatives.</p>

Science isn’t broken, but we can do better: here’s how (2018-04-17)
<figure><img src="https://images.theconversation.com/files/215127/original/file-20180417-101509-12bcoay.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The Golden Age of science is in the future.</span> <span class="attribution"><span class="source">Joker/Shutterstock</span></span></figcaption></figure><p>Every time a scandal breaks in one of the thousands of places where research is conducted across the world, we see headlines to the effect that “<a href="http://www.slate.com/articles/health_and_science/science/2017/05/science_is_broken_how_much_should_we_fix_it.html">science is broken</a>”.</p>
<p>But if it’s “broken” today, then when do we suggest it was better?</p>
<p>Point me to the period in human history where we had more brilliant people or better technologies for doing science than we do today. Explain to me how something “broken” so spectacularly delivers the goods. Convince me I ought to downplay the stunning achievement of – say – the detection of <a href="https://theconversation.com/au/topics/gravitational-waves-9473">gravitational waves</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/no-science-minister-and-its-unclear-where-science-fits-in-australia-91739">No science minister, and it's unclear where science fits in Australia</a>
</strong>
</em>
</p>
<hr>
<p>I agree, practising science has its frustrations, like every other human endeavour; and scientists can and do go wrong.</p>
<p>But the only place to find the Golden Age of Science is in the future – by making it ourselves.</p>
<p>So let’s not tell ourselves that “science is broken”. Let’s agree that we all share in the responsibility to improve it, by keeping open the mental bandwidth to ask and explore hard questions.</p>
<p>Here, in no particular order, are some of the things that I’ve been thinking about.</p>
<h2>The future of the scientific paper</h2>
<p>Earlier this month The Atlantic magazine published a <a href="https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/">provocative essay</a> headlined “The scientific paper is obsolete”.</p>
<p>The scientific paper has done great things since it was <a href="https://blogs.scientificamerican.com/information-culture/the-mostly-true-origins-of-the-scientific-journal/">developed in the 1600s</a>. Today we could certainly say that production is booming.</p>
<p>But the peer-review system is critically overloaded. The irony is, we’re working so hard to generate papers, we don’t have time to read anybody else’s.</p>
<p>One has to ask, have we hit Peak Paper?</p>
<p>My tentative response is “no”. The scientific paper has endured for a reason, and it still holds. It’s an efficient way to structure and communicate information.</p>
<p>But what do you think? Will we still be publishing papers in 2050? And how else could we do it?</p>
<h2>The pressure to publish</h2>
<p>I was lucky to train under a great scientist, <a href="http://www.chiefscientist.gov.au/2016/10/article-steve-redman-australian-neuroscience-society/">Steve Redman</a>. These days we would describe him as unproductive: he published, at most, two or three papers each year. But every one of those papers was deeply considered, meticulously crafted and, as a result, deeply influential.</p>
<p>I think we would all agree that commitment to quality over quantity is the ideal. Authors could invest more time in their papers, and peer reviewers could invest more time in their critique.</p>
<p>In the real world, we know that the incentives often skew the other way. But where do you intervene to break the cycle?</p>
<p>I recently <a href="https://www.nature.com/news/give-researchers-a-lifetime-word-limit-1.22835">came across a radical suggestion</a>: a lifetime word limit for researchers. I suspect it would be very difficult to enforce, but what about a variation: shifting the focus from publications to CVs?</p>
<p>For starters, let’s contemplate a rule that you can only list a maximum of five papers for any given year when applying for grants or promotions. Your CV would have to list retractions, with an explanation. </p>
<p>On the <a href="https://www.nature.com/news/faculty-promotion-must-assess-reproducibility-1.22596">recommendation of Jeffrey Flier</a>, the former Dean of the Harvard Medical School, candidates for promotion would have to critically assess their own work, including unanswered questions, controversies and uncertainties.</p>
<h2>Predatory journals</h2>
<p>If journals are the gatekeepers, then <a href="https://theconversation.com/au/topics/predatory-journals-21960">predatory journals</a> are the termites that eat the gates and make the community question the integrity of the structure.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/who-will-keep-predatory-science-journals-at-bay-now-that-jeffrey-bealls-blog-is-gone-71613">Who will keep predatory science journals at bay now that Jeffrey Beall's blog is gone?</a>
</strong>
</em>
</p>
<hr>
<p>A predatory journal is one that typically charges high fees for publication with little or no credible peer-review process. Such journals have no credibility.</p>
<p>How do we fight back?</p>
<p>How do we arm people in the community who aren’t scientists, and don’t know anything about impact factors and journal rankings and editorial standards, to recognise quality?</p>
<p>Is there an analogy to fair-trade coffee: a stamp that consumers can look for on the product that demonstrates it complies with a certain standard?</p>
<p>Could we have an “ethical journal” stamp, building on the excellent work of the <a href="https://publicationethics.org/">Committee On Publication Ethics</a>?</p>
<h2>Artificial intelligence</h2>
<p>Bloomberg <a href="https://www.bloomberg.com/news/articles/2018-02-13/in-the-war-for-ai-talent-sky-high-salaries-are-the-weapons">reports</a> that there are now five ways to command a multi-year, seven-figure salary.</p>
<p>It used to be four: chief executive officer, banker, celebrity entertainer, professional athlete.</p>
<p>Now add on a person with a PhD in artificial intelligence (AI).</p>
<p>This is the AI century. Like all great waves in technology, it breaks on researchers first.</p>
<p>Time and time again, we get the future – we make the future – before it sweeps over everyone else. </p>
<p>But what does it mean for research training? What roles that scientists do today, will robots do tomorrow? What roles that no one can do today will become possible, with the power of humans and robots combined?</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/finkels-law-robots-wont-replace-us-because-we-still-need-that-human-touch-82814">Finkel's Law: robots won't replace us because we still need that human touch</a>
</strong>
</em>
</p>
<hr>
<h2>A better future</h2>
<p>To these, I could add more questions.</p>
<p>Let me simply conclude with the two things I know for certain. One, that these questions are crucial, because the future of science is the fate of the world. And two, that as long as we are scientists, we will never cease to ask them.</p>
<p>We will know that science is truly “broken” if we ever give up the quest to make it better.</p>
<hr>
<p><em>This article is based on a keynote speech Alan Finkel delivered at the 2018 <a href="http://www.qpr.edu.au/">Quality in Postgraduate Research Conference</a> in Adelaide, April 17.</em></p>
<p class="fine-print"><em><span>Alan Finkel does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p><em>The only place to find the Golden Age of Science is in the future, but we need some help in getting there.</em></p>
<p>Alan Finkel, Australia’s Chief Scientist, Office of the Chief Scientist. Licensed as Creative Commons – attribution, no derivatives.</p>

Science’s credibility crisis: why it will get worse before it can get better (2017-11-09)
<figure><img src="https://images.theconversation.com/files/193759/original/file-20171108-26972-17v1ar7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Science itself needs to be put under the microscope and carefully scrutinised to deal with its flaws.</span> <span class="attribution"><span class="source">Nattapat Jitrungruengnij/Shutterstock</span></span></figcaption></figure><p>Science’s credibility crisis is making headlines once more thanks to a <a href="http://onlinelibrary.wiley.com/doi/10.1111/ecoj.12461/full">paper</a> from John P. A. Ioannidis and co-authors. Ioannidis, an expert in statistics, medicine and health policy at Stanford University, has done more than anyone else to ring the alarm bells on science’s quality-control problems: scientific results are published which other researchers cannot reproduce. </p>
<p>When the crisis erupted in the media in 2013 The Economist devoted its <a href="https://www.google.es/search?q=economist++science+goes+wrong&client=firefox-b&dcr=0&source=lnms&tbm=isch&sa=X&ved=0ahUKEwi-uu-D2JrXAhWFOhQKHYcZDF0Q_AUICigB&biw=1920&bih=971#imgrc=6VLfyJMejNEoVM:">cover</a> to “<a href="https://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong">Wrong Science</a>”. Ioannidis’s <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124">work</a> was an important part of the background material for the <a href="https://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble">piece</a>.</p>
<p>In previous papers Ioannidis had mapped the troubles of fields such as <a href="https://www.nature.com/nature/journal/v483/n7391/full/483531a.html">pre-clinical</a> and <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002049">clinical</a> medical studies, commenting on how, <a href="http://www.jclinepi.com/article/S0895-4356(16)00147-5/pdf">under market pressure</a>, clinical medicine has been transformed into finance-based medicine.</p>
<p>In this new <a href="http://onlinelibrary.wiley.com/doi/10.1111/ecoj.12461/full">work</a> he and co-authors target empirical economics research. They conclude that the field is diseased, with one fifth of the subfields investigated showing a 90% incidence of under-powered studies – a good indicator of low-quality research – and a widespread bias in favour of positive effects. </p>
<p>The field of psychology has gone through a similar ordeal. Brian Nosek, professor of psychology at the University of Virginia, and his co-workers ran a <a href="http://science.sciencemag.org/content/349/6251/aac4716">replication analysis</a> of 100 high-profile psychology studies and reported that only about one third of the studies could be replicated. </p>
<p>Several other instances of bad science have gained attention in the media. The problems in <a href="https://replicationindex.wordpress.com/2017/02/02/reconstruction-of-a-train-wreck-how-priming-research-went-of-the-rails/comment-page-1/">“priming research”</a>, relevant to marketing and advertising, prompted Nobel Prize winner Daniel Kahneman to issue a publicised statement of <a href="https://www.nature.com/news/nobel-laureate-challenges-psychologists-to-clean-up-their-act-1.11535">concern</a> about the wave of failed replications. </p>
<p>And a study on “power poses”, which claimed that body posture influences a person’s hormone levels and “feelings of power”, first went viral on <a href="https://www.ted.com/talks/amy_cuddy_your_body_language_shapes_who_you_are">TED</a> when it was published – then again when its replication <a href="https://www.nytimes.com/2017/10/18/magazine/when-the-revolution-came-for-amy-cuddy.html?_r=0&utm_content=bufferab1e2&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer">failed</a>. </p>
<p>We are observing two new phenomena. On the one hand doubt is shed on the quality of entire scientific fields or sub-fields. On the other this doubt is played out in the open, in the media and blogosphere.</p>
<h2>Fixes</h2>
<p>In his <a href="http://onlinelibrary.wiley.com/doi/10.1111/ecoj.12461/full">newest work</a> Ioannidis sets out a list of remedies that science needs to adopt urgently. These include fostering a culture of replication, data sharing and more collaborative work that pools together larger data sets, along with pre-specification of protocols, including model specifications and the analyses to be conducted.</p>
<p><a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1001747">Ioannidis</a> has previously proposed additional remedies to “fix” science, as have <a href="http://orca.cf.ac.uk/97336/">other investigators</a>. The list includes better statistical methods and better teaching of statistics as well as measures to restore the right system of incentives at all stages of the scientific production system – from peer review to academic careers. </p>
<p>Important work is already being done by committed individuals and communities, among them Nosek’s <a href="https://osf.io/ezcuj/">Reproducibility Project</a>, Ioannidis’ <a href="https://metrics.stanford.edu/">Meta-research innovation centre</a>, Ben Goldacre’s <a href="http://www.alltrials.net/">alltrials.net</a> and the activities of <a href="http://retractionwatch.com/">Retraction Watch</a>. These initiatives – which attracted <a href="https://www.wired.com/2017/01/john-arnold-waging-war-on-bad-science/">private funding</a> – are necessary and timely.</p>
<p>But what are the chances that these remedies will work? Will this crisis be solved any time soon?</p>
<h2>Methods, incentives and introspection</h2>
<p>Ioannidis and co-authors are aware of the interplay between methods and incentives. For example, they say they’d refrain from suggesting that underpowered studies go unpublished, “as such a strategy would put pressure on investigators to report unrealistic and inflated power estimates based on spurious assumptions”.</p>
<p>This is a crucial point. Better practices will only be adopted if new incentives gain traction. In turn the incentives will have traction only if they address the right set of science’s problems and contradictions. </p>
<p>Ethics is a crucial issue in this respect. And here is where research effort is lacking. The broader field of economics is aware of its ethical problems after Paul Romer – now chief economist of the World Bank – coined the term “<a href="https://paulromer.net/wp-content/uploads/2015/05/Mathiness.pdf">Mathiness</a>” to signify the use of mathematics to veil normative premises. Yet there seems to be some hesitation to join the dots from the methodology to the ethos of the discipline, or of science overall.</p>
<p>The book <a href="https://www.amazon.com/Rightful-Place-Science-Verge/dp/0692596380">Science on the Verge</a> has proposed an analysis of the root causes of the crisis, including its neglected ethical dimension. The formulation of remedial measures depends on understanding <a href="http://www.sciencedirect.com/science/article/pii/S0016328717301969">what happened to science</a> and how this reflects on its <a href="https://theconversation.com/to-tackle-the-post-truth-world-science-must-reform-itself-70455">social role</a>, including when science feeds into <a href="http://www.sciencedirect.com/science/article/pii/S0016328717300472">evidence based policy</a>. </p>
<p>These analyses are indebted to philosophers Silvio O. Funtowicz and Jerome R. Ravetz, who spent several decades studying <a href="https://en.wikipedia.org/wiki/Uncertainty_and_quality_in_science_for_policy">science’s quality control arrangements</a> and how quality and uncertainty affect the <a href="http://www.sciencedirect.com/science/article/pii/001632879390022L">use of science for policy</a>.</p>
<p>Ravetz’s book “<a href="https://en.wikipedia.org/wiki/Scientific_Knowledge_and_Its_Social_Problems">Scientific knowledge and its social problems</a>” published in 1971 predicted several relevant features of the present crisis.</p>
<p>For Ravetz it is possible for a field <a href="https://en.wikipedia.org/wiki/Scientific_Knowledge_and_Its_Social_Problems">to be diseased</a>, so that shoddy work is routinely produced and accepted. Yet, he notes, it will be far from easy to come to accept the existence of such a condition – and even more difficult to reform it. </p>
<p>Reforming a diseased field, or arresting the incipient decline of another, will be delicate tasks, adds Ravetz, which call for a </p>
<blockquote>
<p>sense of integrity, and a commitment to good work, among a significant section of the members of the field; and committed leaders with scientific ability and political skill. No quantity of published research reports, nor even an apparatus of institutional structures, can do anything to maintain or restore the health of a field in the absence of this essential ethical element operating through the interpersonal channel of communication.</p>
</blockquote>
<p>Ravetz emphasises the loss of this essential ethical element. In later works he notes that the new social and ethical conditions of science are reflected in a set of <a href="http://www.andreasaltelli.eu/file/repository/Maturing_Contradictions_2011_1.pdf">“emerging contradictions”</a>. These concern the cognitive dissonance between the official image of science as enlightened, egalitarian, protective and virtuous, against the current realities of scientific dogmatism, elitism and corruption; of science serving corporate interests and practices; of science used as an ersatz religion. </p>
<p>Echoes of Ravetz’s analysis can be found in many recent works, such as on the <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674046467">commodification of science</a>, or on the present problems <a href="https://theconversation.com/science-in-crisis-from-the-sugar-scam-to-brexit-our-faith-in-experts-is-fading-65016">with trust in expertise</a>. </p>
<h2>A call to arms?</h2>
<p>Ioannidis and co-authors are careful to stress the importance of a multidisciplinary approach, as both troubles and solutions may spill over from one discipline to another. This would perhaps be a call to arms for social scientists in general – and for those who study science itself – to tackle the crisis as a priority. </p>
<p>Here we clash with another of science’s contradictions: at this point in time, to study science as a scholar would mean to criticise its mainstream image and role. We do not see this happening any time soon. Because of the scars of “science wars” – whose spectre is <a href="https://theconversation.com/science-wars-in-the-age-of-donald-trump-67594">periodically resuscitated</a> – social scientists are wary of being seen as attacking science, or worse helping US President Donald Trump. </p>
<p>Scientists overall wish to use their <a href="https://theconversation.com/forcing-consensus-is-bad-for-science-and-society-77079">moral authority</a> and association with Enlightenment values, as seen in the recent <a href="https://theconversation.com/a-scientists-march-on-washington-is-a-bad-idea-heres-why-73305">marches for science</a>. </p>
<p>If these contradictions are real, then we are condemned to see the present crisis becoming worse before it can become better.</p>
<p class="fine-print"><em><span>Andrea Saltelli does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Andrea Saltelli, Adjunct Professor, Centre for the Study of the Sciences and the Humanities, University of Bergen
Who will keep predatory science journals at bay now that Jeffrey Beall’s blog is gone?
<figure><img src="https://images.theconversation.com/files/153532/original/image-20170119-26563-1bw4put.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The number of predatory scientific journals has exploded in recent years.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>For aficionados of bad science, the <a href="http://web.archive.org/web/20161130225402/https://scholarlyoa.com/">blog</a> of University of Colorado librarian <a href="https://en.wikipedia.org/wiki/Jeffrey_Beall">Jeffrey Beall</a> was essential reading. Beall’s blog charted the murky world of predatory and vanity academic publishers, many of which charge excessive fees for publishing papers or have dysfunctional peer review processes.</p>
<p>I’ve seen rubbish on <a href="http://web.archive.org/web/20161220104931/https://scholarlyoa.com/2016/07/14/more-fringe-science-from-borderline-publisher-frontiers/">chemtrails</a>, <a href="http://web.archive.org/web/20161220062215/https://scholarlyoa.com/2016/02/04/fringe-scientist-named-editor-in-chief-of-omics-astrobiology-journal/">alien life</a>, <a href="http://web.archive.org/web/20161207062935/https://scholarlyoa.com/2013/07/16/recognizing-a-pattern-of-problems-in-pattern-recognition-in-physics/">climate</a>, <a href="http://web.archive.org/web/20161108155658/https://scholarlyoa.com/2014/12/16/the-chinese-publisher-scirp-scientific-research-publishing-a-publishing-empire-built-on-junk-science/">HIV-AIDS</a> and <a href="http://web.archive.org/web/20150905084939/http://scholarlyoa.com/2013/07/20/omics-journal-publishes-pseudo-science-vaccine-paper/">vaccines</a> appear in these (unintended) parodies of academic publications. Although, to be honest, they can be a guilty pleasure of sorts. Perhaps I’m like a film buff getting a kick out of Ed Wood’s “<a href="http://www.imdb.com/title/tt0052077/">Plan 9 from Outer space</a>”.</p>
<p>But recently all of the content on Beall’s blog was <a href="http://www.sciencemag.org/news/2017/01/mystery-controversial-list-predatory-publishers-disappears">wiped without any warning</a>. While much of Beall’s blog is <a href="https://web.archive.org/web/20170112125427/https://scholarlyoa.com/publishers/">archived</a>, it had been charting the evolution of predatory academic publishing, including conferences and the purchasing of existing journals. With Beall’s blog gone, it will become harder to keep track of the underbelly of academic publishing.</p>
<h2>Changing face of scientific publishing</h2>
<p>Traditionally, academic journals have been sustained via subscriptions, particularly those charged to academic libraries. Libraries would pick and choose which journals to subscribe to, in large part based on the requests of academics. </p>
<p>Subscriptions provided some incentive to maintain quality but also limited the readership of academic papers, effectively excluding the broader public (whose taxes often funded the research).</p>
<p>As the internet has made sharing information easy, academic publishing is shifting online too. The “<a href="https://theconversation.com/au/topics/open-access-1060">open access</a>” model is increasingly popular: authors are charged publication fees and the resulting papers are freely available online.</p>
<p>In principle, I like open access, as I believe science should be disseminated to the broadest audience possible. But there are perverse incentives. Will a publisher reject a manuscript that is manifestly rubbish, and forego the fees it would charge the author? In some cases the answer is “no”.</p>
<p>Furthermore, the shift from printed journals to online publications has facilitated predatory and vanity academic publishers. Computers and websites have replaced printing presses and bound volumes. One publisher on Beall’s list, Zant World Press, is run from a <a href="https://zantworldpress.com/about-us/">Melbourne suburban house</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=498&fit=crop&dpr=1 600w, https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=498&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=498&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=625&fit=crop&dpr=1 754w, https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=625&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/153530/original/image-20170119-26585-1a18f10.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=625&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An archive of Beall’s site maintains the most recent list of suspect journals.</span>
</figcaption>
</figure>
<p>Beall’s blog charted the explosion of predatory publishers exploiting the open access model. His list grew from just 18 publishers in 2011 to <a href="http://web.archive.org/web/20170111172023/https://scholarlyoa.com/2017/01/03/bealls-list-of-predatory-publishers-2017/">1,155 publishers in 2017</a>!</p>
<p>I, along with many others, found Beall’s list an incredibly useful resource. Suspicious scientific claims could often be traced back to journals associated with publishers on the list.</p>
<p>For example, in 2015 many newspapers printed claims that chocolate helped weight loss, but it <a href="https://theconversation.com/trolling-our-confirmation-bias-one-bite-and-were-easily-sucked-in-42621">was all a hoax</a>, built on a paper published in the <a href="http://www.intarchmed.com/">International Archives of Medicine</a>, which was on <a href="https://archive.fo/9MAAD">Beall’s list</a>.</p>
<p>I recently became aware of another prank, played at the expense of a predatory publisher. Astronomer <a href="http://www.isdc.unige.ch/%7Edeckert/newsite/Dominique_Eckerts_Homepage.html">Dominique Eckert</a> submitted the joke paper “Get me off Your Fucking Mailing List” to <a href="http://www.iosrjournals.org/">IOSR journals</a>. The paper consists of “<a href="http://www.scs.stanford.edu/%7Edm/home/papers/remove.pdf">get me off your fucking mailing list</a>” repeated hundreds of times.</p>
<p>While one cannot fault the paper for clarity of expression, it isn’t suitable for an academic journal. But less than a week after Eckert submitted the paper, it was accepted for publication. The “reviewers’ comments” were “quality of manuscript is good”. Manuscript handling charges were US$75 (A$100).</p>
<p>Remarkably, this isn’t the first time a predatory publisher has accepted “Get me off Your Fucking Mailing List”. <a href="http://www.slate.com/blogs/browbeat/2014/11/24/bogus_academic_journal_accepts_paper_that_reads_get_me_off_your_fucking.html">Peter Vamplew</a> played the same prank in 2014.</p>
<p>Beall planned a post on Eckert’s prank for Thursday January 12, 2017, but it never happened. By then, all the content was wiped from the blog.</p>
<p>Why this happened isn’t yet clear. The University of Colorado says it was Beall’s <a href="http://www.sciencemag.org/news/2017/01/mystery-controversial-list-predatory-publishers-disappears">personal decision</a>. However, <a href="https://www.sspnet.org/careers/professional-profiles/lacey-earle/">Lacey Earle</a>, who has been working with Beall, tweeted that Beall “was forced to shut down blog due to threats and politics”.</p>
<p>Certainly there are many publishers and individuals who are no fans of Beall, and legal threats have been made in the past. Without a doubt, the blog has hurt some publishers’ reputations and bottom lines. </p>
<p>Indeed, Beall’s work facilitated the US Federal Trade Commission charging <a href="https://www.omicsonline.org/">OMICS Group</a> with <a href="https://www.ftc.gov/news-events/press-releases/2016/08/ftc-charges-academic-journal-publisher-omics-group-deceived">deceptive acts or practices</a> in August 2016. <a href="https://www.wired.com/2016/09/ftc-cracking-predatory-science-journals/">OMICS has responded</a>, describing the allegations as “baseless”.</p>
<h2>Changing times</h2>
<p>A few years ago, predatory publishing often consisted of websites with stock images and poor grammar. Sometimes journal “editors” were revealed to be <a href="http://web.archive.org/web/20170113050537/https://scholarlyoa.com/2015/07/07/predatory-journal-lists-murdered-doctor-as-its-editor-in-chief/">identities stolen off the web</a>. </p>
<p>But, increasingly, predatory publishers are running academic conferences in countries around the globe, including the <a href="http://web.archive.org/web/20161127023353/http://www.conferenceseries.com/usa-meetings/">US</a> and <a href="http://web.archive.org/web/20170115115454/http://www.conferenceseries.com/australia-meetings">Australia</a>. Often the conferences do not live up to their hype, as Radio National’s Hagar Cohen found when <a href="http://www.abc.net.au/radionational/programs/backgroundbriefing/2015-08-02/6656116">she attended</a> an OMICS conference in Brisbane in 2015.</p>
<p>Predatory publishers are also <a href="http://web.archive.org/web/20170110160651/https://scholarlyoa.com/2016/09/29/scam-publisher-omics-international-buying-legitimate-journals/">buying existing journals</a> in developed countries. Recently the <a href="http://www.amj.net.au/index.php?journal=AMJ">Australasian Medical Journal</a>’s contact details shifted from Melbourne to London, and it now shares the postal address of <a href="https://www.imedpub.com/">iMedPub</a>, an affiliate of OMICS.</p>
<p>Beall had been reporting this changing landscape of predatory publishing, and I suspect this is where the loss of his blog will have the greatest impact. That said, Beall’s archived list will long remain a valuable resource. And perhaps most importantly, he made the community aware of the threat of predatory academic publishing.</p>
<p class="fine-print"><em><span>Michael J. I. Brown receives research funding from the Australian Research Council and Monash University, and has developed space-related titles for Monash University's MWorld educational app.</span></em></p>
Michael J. I. Brown, Associate Professor, Monash University
No, enjoying a gin and tonic doesn’t mean you’re a psychopath
<figure><img src="https://images.theconversation.com/files/141359/original/image-20161012-8415-16ro0ck.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Don't feel bitter, but that story you read about gin was probably wrong.</span> <span class="attribution"><span class="source">Igor Normann/Shutterstock.com</span></span></figcaption></figure><p>I was looking at Facebook one evening last week when my attention was captured by the headline “<a href="http://thetab.com/uk/2016/09/05/gin-lovers-psycopaths-17675">Gin lovers are all massive psychopaths, according to experts</a>” – a somewhat disconcerting thing to read as I sipped the gin and tonic I had in my hand at the time. </p>
<p>As someone whose propensity to empathise with others has seen me spend entire evenings crying over the plight of movie characters, psychopathy has never made its way onto my list of self-diagnoses.</p>
<p>I instantly felt compelled to learn more about how a penchant for gin had become the new diagnostic tool to detect a <a href="https://theconversation.com/psychopaths-versus-sociopaths-what-is-the-difference-45047">psychopath</a>. The short story is, it hasn’t. </p>
<p>I determined this reasonably efficiently. A search for the word “gin” in the <a href="http://www.sciencedirect.com/science/article/pii/S0195666315300428">research paper</a> that prompted this news story produced a grand total of zero hits.</p>
<p>It’s therefore rather concerning that this paper has spawned a huge number of popular articles all reporting this non-existent link, such as <a href="http://www.stylist.co.uk/life/Gin-psychopath-test-alcohol-tonic-drink-truth-personality-favourite-beverage">this one that has been shared on Facebook nearly 300,000 times</a>. </p>
<p>Depending on what you read, if you’re partial to a gin and tonic you are either <a href="http://thetab.com/uk/2016/09/05/gin-lovers-psycopaths-17675">a psychopath</a>, or slightly more generously, <a href="http://www.townandcountrymag.com/leisure/drinks/news/a7758/gin-psychopath-study/">a possible psychopath</a>. </p>
<p>Other stories have cast the net a bit wider, branding <a href="http://www.huffingtonpost.ca/2015/10/14/coffee-psychopathy-study_n_8296076.html">coffee</a> and <a href="http://vinepair.com/booze-news/if-you-are-a-fan-of-ipa-science-says-youre-more-likely-to-by-psychotic/">beer</a> drinkers as potential psychopaths too – which, if you think about it, would make society a pretty scary place. </p>
<h2>Booze news</h2>
<p>These news stories are misreported accounts of <a href="http://www.sciencedirect.com/science/article/pii/S0195666315300428">research</a> from the University of Innsbruck. Across two studies, the researchers investigated the relationship between bitter taste preferences and various antisocial personality traits, including psychopathy.</p>
<p>While many tend to think of it as a disorder that afflicts only the most calculating of criminals, psychopathy is also conceptualised as a personality trait that falls along a continuum, with those at the extreme end characterised by superficial charm, callousness, and a lack of empathy.</p>
<p>The researchers measured psychopathy using a <a href="http://www.uws.edu.au/__data/assets/pdf_file/0005/227057/The_Dirty_Dozen_A_Concise_Measure_of_the_Dark_Triad.pdf">brief personality measure</a> that assesses three socially undesirable personality traits: psychopathy, <a href="https://theconversation.com/why-are-we-becoming-so-narcissistic-heres-the-science-55773">narcissism</a>, and Machiavellianism – collectively known as the “<a href="http://members.shaw.ca/ssucur/materials/02_selected_notes/06_tempest/03_PaulhusWilliams.pdf">dark triad</a>”. </p>
<p>Participants indicated their agreement with statements such as “I tend to be callous or insensitive” and “I tend to lack remorse”. Responses were then averaged to create a score for psychopathy and the other traits.</p>
<p>The researchers measured bitter taste preferences in two ways. First, participants were provided with a list of 10 bitter foods and drinks, including coffee, tonic water, beer, radishes and celery, and rated them on a scale from 1 (dislike strongly) to 6 (like strongly). These scores were then averaged to create an overall measure of bitter taste preferences for each person. The researchers also asked participants to rate their liking for bitter foods and drinks in general (as opposed to the specific examples) on the same scale.</p>
<h2>The bitter truth</h2>
<p>The results showed no significant relationship between psychopathy scores and participants’ preference scores for the specific bitter foods and drinks. That is, those with higher psychopathy scores did not display stronger overall liking for the bitter foods and drinks on the list, including tonic water, coffee and beer.</p>
<p>However, there was a weak correlation between psychopathy scores and participants’ scores on their <em>general</em> preference for bitter tastes. So you might say that people at the psychopathic end of the spectrum are slightly more likely to express a preference for eating or drinking bitter things in general. </p>
<p>How on earth do these findings translate to people who drink gin, coffee or beer being probable psychopaths? Quite simply, they don’t. </p>
<p>The study provided no evidence that an individual’s preference for specific bitter drinks like coffee, beer or tonic water (with or without gin) has any relationship with psychopathy. Even if it had, this would fall a long way short of being able to brand anyone who enjoys a G&T as a psychopath. </p>
<p>The only thing this study found was a weak positive relationship between psychopathy and a general penchant for bitter things. In my view, this link is negligible compared with other, better-established predictors of psychopathy, such as a person’s <a href="https://www.scientificamerican.com/article/secrets-criminal-mind-adrian-raine/">genes</a> or sex.</p>
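<p>Just how little a “weak” correlation buys you can be seen with a quick calculation. The sketch below is a hypothetical illustration – the value r = 0.2 is an assumption chosen to represent a weak correlation, not the figure from the study. Squaring the correlation coefficient gives the share of variance in one variable accounted for by the other, and a weak r leaves the overwhelming majority unexplained:</p>

```python
import math
import random

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(0)

# Fake data with a built-in weak correlation of about 0.2
# (e.g. x = general bitter-taste preference, y = psychopathy score).
# These numbers are illustrative assumptions, not the study's data.
true_r = 0.2
xs = [rng.gauss(0, 1) for _ in range(5000)]
ys = [true_r * x + math.sqrt(1 - true_r ** 2) * rng.gauss(0, 1) for x in xs]

r = pearson_r(xs, ys)
print(f"r = {r:.2f}, variance explained = {r ** 2:.1%}")
```

<p>With r around 0.2, roughly 96% of the variation in the psychopathy scores is unrelated to bitter-taste preference – which is why knowing someone’s drink order tells you essentially nothing about their personality.</p>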
<p>If you want to know whether someone is a psychopath, the truth is that most will reveal themselves soon enough, especially if you know the <a href="https://theconversation.com/not-all-psychopaths-are-criminals-some-psychopathic-traits-are-actually-linked-to-success-51282">telltale signs</a> – which don’t include whether or not they’re brandishing an aperitif.</p>
<p class="fine-print"><em><span>Megan Willis does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Megan Willis, Senior Lecturer, School of Psychology, Australian Catholic University
We need to talk about the bad science being funded
<figure><img src="https://images.theconversation.com/files/130667/original/image-20160715-2110-t669yb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Good science loses out when bad science gets the funding.</span> <span class="attribution"><span class="source">Shutterstock/Looker Studio</span></span></figcaption></figure><p>Spectacular failures to replicate key scientific findings have been documented of late, particularly in <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0144151">biology</a>, <a href="http://science.sciencemag.org/content/349/6251/aac4716">psychology</a> and <a href="http://www.nature.com/nature/journal/v483/n7391/full/483531a.html">medicine</a>.</p>
<p>A report on the issue, published in Nature this May, found that about <a href="http://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970">90%</a> of some 1,576 researchers surveyed now believe there is a reproducibility crisis in science.</p>
<p>While this rightly erodes public confidence in science, it also has serious consequences for governments and philanthropic agencies that fund research, as well as for the pharmaceutical and biotechnology sectors. It means they could be wasting billions of dollars on research each year.</p>
<p>One contributing factor is easily identified. It is the high rate of so-called false discoveries in the literature. They are <a href="http://www.livescience.com/32767-what-are-false-positives-and-false-negatives.html">false-positive findings</a> and lead to the erroneous perception that a definitive scientific discovery has been made. </p>
<p>This high rate occurs because the studies that are published often have <a href="http://rsos.royalsocietypublishing.org/content/1/3/140216">low statistical power to identify a genuine discovery</a> when it is there, and the effects being sought are often small.</p>
<p>Further, dubious scientific practices boost the chance of finding a statistically significant result, usually at a probability of less than one in 20. In fact, our probability threshold for acceptance of a discovery should be more stringent, just as it is for discoveries of new particles in physics. </p>
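<p>To see why a one-in-20 threshold guarantees a steady stream of false discoveries, here is a minimal simulation (our illustration, not from the article): thousands of two-group “studies” are run in which there is no real effect at all, yet a simple z-test still declares roughly one in 20 of them significant.</p>

```python
import math
import random
from statistics import NormalDist, fmean

# Illustration only: simulate studies comparing two groups drawn from
# the SAME distribution, so every "significant" result is a false positive.
random.seed(1)
norm = NormalDist()
n_per_group = 30
n_studies = 10_000
false_positives = 0

for _ in range(n_studies):
    a = [random.gauss(0, 1) for _ in range(n_per_group)]
    b = [random.gauss(0, 1) for _ in range(n_per_group)]
    # z-test on the difference in means (the true sigma of 1 is known here)
    z = (fmean(a) - fmean(b)) / math.sqrt(2 / n_per_group)
    if 2 * (1 - norm.cdf(abs(z))) < 0.05:
        false_positives += 1

print(false_positives / n_studies)  # close to 0.05, i.e. about 1 in 20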
<p>The English mathematician and the father of computing <a href="http://www.biography.com/people/charles-babbage-9193834">Charles Babbage</a> noted the problem in his 1830 book <a href="https://books.google.com.au/books?id=K5BW2hsvDEQC&lpg=PA175&ots=aACXxMKOgT&dq=%E2%80%9Choaxing%2C%20forging%2C%20trimming%20and%20cooking%E2%80%9D%20charles%20babbage&pg=PA175#v=onepage&q=%E2%80%9Choaxing,%20forging,%20trimming%20and%20cooking%E2%80%9D%20charles%20babbage&f=false">Reflections on the Decline of Science in England, and on Some of Its Causes</a>. He formally split these practices into “hoaxing, forging, trimming and cooking”.</p>
<h2>‘Trimming and cooking’ the data today</h2>
<p>In the current jargon, trimming and cooking include failing to report all the data, all the experimental conditions, all the statistics and reworking the probabilities until they appear significant.</p>
<p>The frequency of many of these indefensible practices is above 50%, as <a href="http://www.psychologicalscience.org/index.php/news/releases/questionable-research-practices-surprisingly-common.html">reported by scientists themselves</a> when they are given some <a href="http://pss.sagepub.com/content/23/5/524">incentive for telling the truth</a>.</p>
<p>The English philosopher <a href="http://www.biography.com/people/francis-bacon-9194632">Francis Bacon</a> <a href="http://www.gutenberg.org/files/45988/45988-h/45988-h.htm">wrote almost 400 years ago</a> that we are influenced more by affirmation than negatives and <a href="http://www.goodreads.com/quotes/63465-man-prefers-to-believe-what-he-prefers-to-be-true">added</a>: </p>
<blockquote>
<p>Man prefers to believe what he prefers to be true.</p>
</blockquote>
<p>Deep-seated cognitive biases, consciously and unconsciously, drive scientific corner-cutting in the name of discovery.</p>
<p>This includes <a href="http://psr.sagepub.com/content/2/3/196.abstract">fiddling the primary hypothesis being tested</a> after knowing the actual results or fiddling the statistical tests, the data or both until a <a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002106">statistically significant result is found</a>. Such practices are common.</p>
<p>Even large randomised controlled clinical trials published in the leading medical journals are affected (see <a href="http://compare-trials.org/">compare-trials.org</a>) – despite research plans being specified and registered before the trial starts.</p>
<p>Researchers rarely stick exactly to the plans (about 15% do). Instead, they commonly remove registered planned outcomes (which are presumably negative) and add unregistered ones (which are presumably positive).</p>
<h2>Publish or perish</h2>
<p>We do not need to look far to expose the fundamental cause for the problematic practices pervading many of the sciences. The “<a href="https://theconversation.com/publish-or-perish-culture-encourages-scientists-to-cut-corners-47692">publish or perish</a>” mantra says it all.</p>
<p>Academic progression is hindered by failure to publish in the journals controlled by peers, while it is enhanced by frequent publication of, nearly always positive, research findings. Does this sort of competitive selection sound familiar? </p>
<p>It is a form of cultural natural selection – natural, in that it is embedded in the modern culture of science, and selective in that only survivors progress. The parallels between biological natural selection and selection related to culture have long been accepted. Charles Darwin even described its role in development of language in his <a href="http://www.goodreads.com/book/show/185407.The_Descent_of_Man">The Descent of Man</a> (1871). </p>
<p>Starkly put, the rate of publication varies between scientists. Scientists who publish at a higher rate are preferentially selected for positions and promotions. Such scientists have “children” who establish new laboratories and continue the publication practices of the parent.</p>
<h2>Good science suffers</h2>
<p>In another <a href="https://arxiv.org/abs/1605.09511">study published in May</a>, researchers modelled the intuitive but complex interactions between the pressure and effort to publish new findings and the need to replicate them to nail down true discoveries. It is a well-argued simulation of the operation and culture of modern science. </p>
<p>They also conclude that there is natural selection for bad scientific practice because of incentives that simply reward “publication quantity”:</p>
<blockquote>
<p>Scrupulous research on difficult problems may require years of intense work before yielding coherent, publishable results. If shallower work generating more publications is favored, then researchers interested in pursuing complex questions may find themselves without jobs, perhaps to the detriment of the scientific community more broadly.</p>
</blockquote>
<p>The authors also reiterate the low power of many studies to find a phenomenon if it was truly there. Despite entreaties to increase statistical power, for example by collection of more observations, it has remained consistently low for the last 50 years.</p>
<p>In some fields, it <a href="http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html">averages only 20% to 30%</a>. Natural academic selection has favoured publication of a result, rather than generation of new knowledge.</p>
<p>The impact of Darwinian selection among scientists is amplified when government support for science is low, growth in the scientific literature continues unabated, and universities produce an increasing number of PhD graduates in science.</p>
<p>We hold an idealised view that science is rarely fallible, particularly biology and medicine. Yet many fields are filled with publications of low-powered studies with perhaps <a href="http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124">the majority being wrong</a>.</p>
<p>This problem requires action from scientists, their teachers, their institutions and governments. We will not turn natural selection around but we need to put in place selection pressures for getting the right answer rather than simply published.</p><img src="https://counter.theconversation.com/content/61916/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Simon Gandevia receives funding from the National Health and Medical Research Council.</span></em></p>New studies on the quality of published research shows we could be wasting billions of dollars a year on bad science, to the neglect of good science projects.Simon Gandevia, Deputy Director, Neuroscience Research AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/522022016-01-14T19:22:17Z2016-01-14T19:22:17ZHow not to write about science<figure><img src="https://images.theconversation.com/files/107723/original/image-20160111-7002-1b9x8q8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">This is what happens when science writing gets too turgid.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Amid the many calls for scientists to <a href="https://theconversation.com/when-too-much-science-communication-is-barely-enough-38277">engage with the general public</a>, there are some who feel that scientists ought to remain aloof and disconnected from the broader public. </p>
<p>They believe academics <a href="http://www.theguardian.com/higher-education-network/2015/dec/10/academics-forget-about-public-engagement-stay-in-your-ivory-towers">shouldn’t even attempt</a> to communicate their research to common folk. And many scientists oblige them, by writing in a turgid manner that is highly effective at keeping the public (and their peers) at bay.</p>
<p>So, here are a few of the tricks that scientists use to produce such turgid science writing. These methods restrict science to the smallest and most specialist audience possible. </p>
<p>But writers beware! Stray from these methods and you risk finding an audience for your writing.</p>
<h2>What was done by whom?</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=565&fit=crop&dpr=1 754w, https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=565&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/105602/original/image-20151213-9092-folq06.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=565&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Keeping yourself out of the picture is an old-fashioned way of reducing interest in science.</span>
<span class="attribution"><span class="source">Windell Oskay/flicr</span></span>
</figcaption>
</figure>
<p>You probably already know of journalists’ penchant for “who, what, where, when, why and how”. These are the essentials for creating a captivating story (at least according to journalists). But for scientists who want to remain in the ivory tower, a good start is dropping the “who.” </p>
<p>Hence the passive “it was found that…” rather than the active “I found…” or “scientists discovered…”. Excessive use of such <a href="http://writingcenter.unc.edu/handouts/passive-voice/">passive voice</a> can easily drain the agency and sparkle from science writing.</p>
<p>This depopulated style <a href="http://adsabs.harvard.edu/cgi-bin/nph-abs_connect?db_key=AST&db_key=PRE&qform=AST&arxiv_sel=astro-ph&arxiv_sel=cond-mat&arxiv_sel=cs&arxiv_sel=gr-qc&arxiv_sel=hep-ex&arxiv_sel=hep-lat&arxiv_sel=hep-ph&arxiv_sel=hep-th&arxiv_sel=math&arxiv_sel=math-ph&arxiv_sel=nlin&arxiv_sel=nucl-ex&arxiv_sel=nucl-th&arxiv_sel=physics&arxiv_sel=quant-ph&arxiv_sel=q-bio&sim_query=YES&ned_query=YES&adsobj_query=YES&aut_logic=OR&obj_logic=OR&author=&object=&start_mon=&start_year=1930&end_mon=&end_year=1931&ttl_logic=OR&title=&txt_logic=OR&text=&nr_to_return=200&start_nr=1&jou_pick=ALL&ref_stems=ApJ&data_and=ALL&group_and=ALL&start_entry_day=&start_entry_mon=&start_entry_year=&end_entry_day=&end_entry_mon=&end_entry_year=&min_score=&sort=SCORE&data_type=SHORT&aut_syn=YES&ttl_syn=YES&txt_syn=YES&aut_wt=1.0&obj_wt=1.0&ttl_wt=0.3&txt_wt=3.0&aut_wgt=YES&obj_wgt=YES&ttl_wgt=YES&txt_wgt=YES&ttl_sco=YES&txt_sco=YES&version=1">was once the norm</a> in many academic journals but even bastions of science such as Nature prefer the <a href="http://www.nature.com/authors/author_resources/how_write.html">active voice</a>. No longer should scientists write themselves out of their own manuscripts. </p>
<p>That said, a few funding agencies and journals still encourage the old style of science writing. For example, in hundreds of ARC Discovery Project summaries the word “we” occurs <a href="https://rms.arc.gov.au/RMS/Report/Download/Report/a3f6be6e-33f7-4fb5-98a6-7526aaa184cf/5">a mere 30 times</a>. I’ve even seen guides for students encouraging the use of the passive voice. Nice to see that universities’ devotion to old traditions isn’t limited to dull lectures and silly graduation garments. </p>
<h2>What’s a picture worth?</h2>
<p>A scientist writing about science may well be forced to use images and plots. This obviously presents a risk of clear and concise means of communication. A picture is worth a thousand words? Wrong!</p>
<p>The key to unlocking a science image or plot is often in the caption. I can show you a plot of <a href="http://supernova.lbl.gov/union/figures/Union2.1_Hubble_slide.pdf">supernovae distances and velocities</a>, but if you are unfamiliar with the plot and its conclusions it may tell you nothing. It’s Nobel Prize-winning significance can remain hidden from view. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=473&fit=crop&dpr=1 600w, https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=473&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=473&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=595&fit=crop&dpr=1 754w, https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=595&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/107720/original/image-20160111-7009-12gy0oo.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=595&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">But what does it mean?</span>
<span class="attribution"><a class="source" href="http://supernova.lbl.gov/union/">Supernova Cosmology Project</a></span>
</figcaption>
</figure>
<p>A caption can tell you what to look for, warn you about subtleties in the image, or just tell you what the axes represent. A poorly worded caption can guarantee that a picture tells far less than a thousand words. Alternatively, an overly long caption can bury key points in a wall of text. </p>
<p>And there are even more ways of keeping science out of the limelight with images and plots. Some scientists choose font sizes, symbols and colours that don’t work well when viewed on a screen. More than a dash of clutter can stymie insight too. That can reduce the chance that images are understood by an increasingly small audience. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/105601/original/image-20151213-30725-18pa4wd.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">This image could tell you a lot about galaxies, but not with this perfunctory caption.</span>
<span class="attribution"><span class="source">Michael Brown / SDSS</span></span>
</figcaption>
</figure>
<h2>Language</h2>
<p>There are all sorts of ways scientists can hinder communication by misusing language. Unnecessary jargon and acronyms (UJAA) are an obvious starting point. Indeed, a <a href="http://news.stanford.edu/news/2015/november/fraud-science-papers-111615.html">recent study</a> found that scientists committing fraud use more jargon than other scientists, presumably to obscure true understanding of their “research”.</p>
<p>Scientists can also water down the impact of their work with excessively cautious language. Or perhaps, it is possible they might potentially water down any likely impact of their preliminary study with language that could in some circumstances be consistent with excessive caution. </p>
<p>Scientists can antagonise their audiences too. Stating something is “obvious” or “clear” without any quantitative analysis is a good start. They may even want to ignore their data, so the text doesn’t match the analysis. Scientists may be pleasantly surprised at how often <a href="https://theconversation.com/peer-review-isnt-perfect-and-the-media-doesnt-always-help-11318">they can get away with this</a>. </p>
<h2>What I did on my science</h2>
<p>An incredible labour-saving device is a slavish devotion to chronology. Some science writers don’t organise and synthesise, but just doggedly follow the time line. You may be familiar with this writing style from primary school essays, such as the timeless classic “what I did on my holiday”. </p>
<p>The pursuit of science <a href="http://undsci.berkeley.edu/lessons/pdfs/how_science_works.pdf">is not particularly linear</a>. There are methodological dead ends, repeated analyses, new questions and the random arrival of genuine insights. With the benefit of hindsight, a researcher would invariably do things differently, but they don’t need to share that hindsight with others. </p>
<p>Rather than summarising methodological dead ends, pages can be devoted to them, despite their marginal benefit to others. A slavish devotion to chronology allows scientists to get bogged down in method, rather than distractions such as motivations and findings. </p>
<p>Scientists can scatter the fundamental questions and key insights throughout their writing (ideally in the middle of paragraphs), which will then be overlooked by all but the most dedicated readers. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/105604/original/image-20151213-30725-9xvudl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">By being slavishly chronological, you can get bogged down in method and reduce the organisation of your science writing.</span>
<span class="attribution"><span class="source">J Mark Dodds/flickr</span></span>
</figcaption>
</figure>
<p>With these simple techniques scientists can resist the siren call of public engagement. Interest and insight can be avoided, keeping the public at arm’s length. </p>
<p>Indeed, with sufficient devotion to this turgid and disorganised writing style, scientists may even keep interest and insight hidden from themselves.</p><img src="https://counter.theconversation.com/content/52202/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Michael J. I. Brown receives research funding from the Australian Research Council and Monash University, and has developed space-related titles for Monash University's MWorld educational app.
</span></em></p>Science can be fascinating and exciting. But much science writing is dull and obscure. Here are some of the tricks scientists often use to suck the joy out of science.Michael J. I. Brown, Associate professor, Monash UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/454542015-07-30T11:57:50Z2015-07-30T11:57:50ZHere’s why scientists haven’t invented an impossible space engine – despite what you may have read<figure><img src="https://images.theconversation.com/files/90305/original/image-20150730-25757-1xmnatt.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>What if I told you that recent experiments have revealed a revolutionary new method of propulsion that threatens to overthrow the laws of physics as we know them? That its inventor claims it could allow us to travel to the Moon in four hours without the use of fuel? What if I then told you we cannot explain exactly how it works and, in fact, there are some very good reasons why it shouldn’t work at all? I wouldn’t blame you for being sceptical.</p>
<p>The somewhat fantastical EMDrive (short for Electromagnetic Drive) recently returned to the public eye after an academic claimed to have recorded the drive producing measurable thrust. The experiments from Professor Martin Tajmar’s group at the Dresden University of Technology have <a href="http://www.telegraph.co.uk/news/science/space/11769030/Impossible-rocket-drive-works-and-could-get-to-Moon-in-four-hours.html">spawned numerous</a> <a href="http://www.dailymail.co.uk/sciencetech/article-3177449/Nasa-s-impossible-fuel-free-thrusters-work-German-scientists-confirm-viability-super-fast-space-travel-slash-journey-moon-4-HOURS.html">overexcited headlines</a> making claims that –- let’s be very clear here –- are not supported by the science.</p>
<p>The idea for the EMDrive was <a href="http://www.emdrive.com">first proposed</a> by Roger Shawyer in 1999 but, tellingly, he has only recently published <a href="http://www.sciencedirect.com/science/article/pii/S0094576515002726?np=y">any work</a> on it in a peer-reviewed scientific journal, and a rather obscure one at that. Shawyer claims his device works by bouncing microwaves around inside a conical cavity. According to him, the taper of the cavity creates a change in the group velocity of the microwaves as they move from one end to the other, which leads to an unbalanced force, which then translates into a thrust. If it worked, the EMDrive would be a propulsion method unlike any other, requiring no propellant to produce thrust.</p>
<h2>Fundamental problems</h2>
<p>There is, of course, a flaw in this idea. The design instantly violates the principle of <a href="http://www.physicsclassroom.com/class/momentum/Lesson-2/Momentum-Conservation-Principle">conservation of momentum</a>. This states the total momentum (mass x velocity) of objects in a system must remain the same and is linked to Newton’s Third Law. Essentially, for an object to accelerate in one direction, there must be an equal force directed the opposite way. In the case of engines, this usually means firing out particles (such as propellant) or radiation.</p>
<p>The EMDrive is designed to be a closed system that doesn’t emit any particles or radiation. It cannot possibly generate any thrust without breaking some seriously fundamental laws of physics. To put it bluntly, it’s like trying to pull yourself up by your shoelaces and hoping you’ll levitate.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/90309/original/image-20150730-25777-1bhdjjj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">From Earth to the Moon in four hours? Still impossible.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Nonetheless, a few open-minded experimental groups have built prototype EMDrives and all seem to see it generate some form of thrust. This has led to a lot of excitement. Maybe the laws of physics as we know them are wrong?</p>
<p>Eagleworks, a NASA-based group, built a prototype and <a href="http://ntrs.nasa.gov/search.jsp?R=20140006052">last year reported</a> 30-50 micronewtons of thrust that could not be explained by any conventional theory. This work was not peer-reviewed. Now, Tajmar’s group in Dresden say they have built a new version of the EMDrive <a href="http://arc.aiaa.org/doi/abs/10.2514/6.2015-4083">and detected</a> 20 micronewtons of thrust. This is a much smaller value, but still significant if it really is generated by some new principle.</p>
<h2>Experimental problems</h2>
<p>Straightaway, there are problems with this experiment. The abstract states: “Our test campaign cannot confirm or refute the claims of the EMDrive.” Then, a careful reading of the paper reveals this observation: “The control experiment actually gave the biggest thrust … We were really puzzled by this large thrust from our control experiment where we expected to measure zero.”</p>
<p>Yes, the control experiment designed not to generate any thrust still measures a thrust. Then there’s the peculiar gradual way the thrust seems to turn on and off that looks suspiciously like a thermal effect, and then there are acknowledged heating problems. All this leads to the conclusion stated in the paper that “such a set-up does not seem to be able to adequately measure precise thrusts.” Similar problems were seen by the Eagleworks group, with thrust also mysteriously appearing in their control test.</p>
<p>Taken together, these results strongly suggest that the measured signatures of thrust are subtle experimental errors. Possible sources include thermal effects, problems with magnetic shielding or even a non-uniform gravitational field in the laboratory leading to erroneous force measurements. As a comparison, the force measured in this latest experiment is roughly comparable to the gravitational attraction between two average-sized people (100kg) standing about 15cm apart. It is an extremely small force. </p>
<p>That the experiments detect a measureable thrust is undeniable. Where the thrust comes from, whether it is real or erroneous, is inconclusive. That the experiments in any way confirm the EMDrive works is a falsehood. This was noted by Tajmar himself, who told the <a href="http://www.ibtimes.co.uk/emdrive-dr-martin-tajmar-generates-thrust-test-controversial-space-propulsion-technology-1513151">International Business Times</a> “I believe there is no real news here yet.”</p>
<p>The experimental scientists involved have done their jobs to the best of their ability, having tested a hypothesis – albeit a spectacularly unlikely one – and reported their results. These scientists aren’t actually claiming to have invented a warp drive or to have broken the laws of physics. All they’re saying at the moment is that they’ve found something odd and unexplained that might be something new but is likely an experimental artefact that needs further study. The panoply of clickbait headlines and poorly researched articles on the topic are doing something of a disservice to their scientific integrity by claiming otherwise.</p><img src="https://counter.theconversation.com/content/45454/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Steven Thomson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Reported “evidence” that the proposed fuel-free “EmDrive” works (and breaks the known laws of physics) is nothing of the sort.Steven Thomson, PhD candidate in condensed matter theory, University of St AndrewsLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/426212015-06-02T06:24:48Z2015-06-02T06:24:48ZTrolling our confirmation bias: one bite and we’re easily sucked in<figure><img src="https://images.theconversation.com/files/83617/original/image-20150602-6987-hetxxi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What? Eating chocolate doesn't help lose weight? But I read it in the newspaper!</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/anjuli_ayer/2950226374/in/photostream/">anjuli_ayer/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span></figcaption></figure><p>Earlier in the year the world was finally treated to some good news from science: a report was published that claimed to show that <a href="http://imed.pub/ojs/index.php/iam/article/view/1087/728">eating chocolate could help you lose weight faster</a>. </p>
<p>Although it all seemed too good to be true, the story was reported in news outlets around the world. Europe’s largest daily newspaper, Bild, ran it on the front page. It made TV news in Australia and the <a href="http://www.ktre.com/story/28964908/study-chocolate-helps-weight-loss">US</a>, it landed on the <a href="http://www.irishexaminer.com/examviral/science-world/scientists-say-eating-chocolate-can-help-you-lose-weight-321189.html">Irish Examiner</a>, <a href="http://timesofindia.indiatimes.com/life-style/health-fitness/diet/need-a-sweeter-way-to-lose-weight-eat-chocolates/articleshow/46770172.cms">The Times of India</a>, and the Huffington Post in <a href="http://videos.huffingtonpost.de/lifestyle/macht-schokolade-etwa-schlank-neue-studie-schokolade-hilft-beim-abnehmen_id_4577004.html">various</a> <a href="http://www.huffingtonpost.in/2015/05/29/chocolate-weight-loss_n_6975422.html">languages</a>.</p>
<p>But it <em>was</em> too good to be true. Or, if you’re an aficionado of the work of trolls, it was even better.</p>
<p>Last week science journalist John Bohannon <a href="http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800">revealed</a> that the whole study was an elaborate prank: a piece of terrible science he and documentary filmmakers Peter Onneken and Diana Löbl – with general practitioner Gunter Frank and financial analyst Alex Droste-Haars – had set up to expose the corruption at the heart of the “diet research-media complex”.</p>
<h2>Terrible science</h2>
<p>So what did they do? Bohannon and his team went through all the standard practices of science. But at every stage they chose methods they knew would lead not to truth, but to clickbaity headlines.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=1045&fit=crop&dpr=1 600w, https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=1045&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=1045&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1313&fit=crop&dpr=1 754w, https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1313&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/83621/original/image-20150602-7003-y0t4tr.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1313&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Has the Daily Star gone coco?</span>
<span class="attribution"><span class="source">Screenshot by author</span></span>
</figcaption>
</figure>
<p>To begin the study they recruited a tiny sample of 15 people willing to go on a diet for three weeks. They divided the sample into three groups: one followed a low carbohydrate diet; another followed that diet but also got a 42 gram bar of chocolate every day; and finally the control group were asked to make no changes to their regular diet. </p>
<p>Throughout the experiment the researchers measured the participants in 18 different ways, including their weight, cholesterol, sodium, blood protein levels, their sleep quality and their general well being.</p>
<p>And here’s their first trick. Measuring such a tiny sample in so many ways means you’re almost bound to find something vaguely reportable. As Bohannon <a href="http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800">explains it</a>:</p>
<blockquote>
<p>Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out — the headline could have been that chocolate improves sleep or lowers blood pressure — but we knew our chances of getting at least one “statistically significant” result were pretty good.</p>
</blockquote>
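<p>Bohannon’s lottery-ticket logic is easy to check for yourself. The sketch below (my illustration, not part of the original study) computes the family-wise error rate: the chance that at least one of <em>k</em> outcome measures crosses the conventional p &lt; 0.05 threshold purely by chance, even when no real effect exists. It assumes the outcomes are tested independently, which the study’s 18 partly correlated measurements were not, so treat it as a rough approximation.</p>

```python
# Chance of at least one spurious "significant" result when testing
# k outcomes at alpha = 0.05, with no real effect present.
# Illustrative sketch only: assumes the k tests are independent.
alpha = 0.05

def false_positive_rate(k):
    # P(at least one fluke) = 1 - P(no flukes across all k tests)
    return 1 - (1 - alpha) ** k

for k in (1, 5, 18):
    print(f"{k:>2} outcomes -> {false_positive_rate(k):.0%} chance of a fluke 'finding'")
```

<p>With the chocolate study’s 18 measurements, the chance of at least one fluke “statistically significant” result is roughly 60% before any deliberate spin even begins.</p>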
<p>Then they submitted it for publication. But again Bohannon chose the path leading away from truth, picking a journal from his <a href="http://www.sciencemag.org/content/342/6154/60/suppl/DC1">extensive list of open access academic journals</a> (more on this below). Although the journal (<a href="http://imed.pub/ojs/index.php/iam/index">International Archives of Medicine</a>) looks somewhat like a real academic journal, there was no <a href="https://theconversation.com/au/topics/peer-review">peer review</a>: the paper was accepted within 24 hours and published two weeks later.</p>
<h2>But great publicity!</h2>
<p>Practiced in the white magic of science journalism and familiar with the dark arts of science PR, Bohannon then <a href="http://instituteofdiet.com/2015/03/29/international-press-release-slim-by-chocolate/">whipped up a press release</a> he knew would bait the world’s media. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=812&fit=crop&dpr=1 600w, https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=812&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=812&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1021&fit=crop&dpr=1 754w, https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1021&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/83622/original/image-20150602-6967-1xaak8r.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1021&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hmm, chocolate study?</span>
<span class="attribution"><span class="source">Screenshot by author</span></span>
</figcaption>
</figure>
<p>The key, Bohannon stated, was to “exploit journalists’ incredible laziness” – to write the press release so that reporters had the story laid out on a plate for them, as it were. As he later wrote, he “felt a queasy mixture of pride and disgust as our lure zinged out into the world”. And a great many swallowed it whole.</p>
<p>Headlines around the world screamed <a href="http://www.dailystar.co.uk/diet-fitness/433688/chocolate-diet-how-to-lose-weight">Has the world gone coco? Eating chocolate can help you LOSE weight</a>, <a href="http://timesofindia.indiatimes.com/life-style/health-fitness/diet/need-a-sweeter-way-to-lose-weight-eat-chocolates/articleshow/46770172.cms">Need a ‘sweeter’ way to lose weight? Eat chocolates!</a> and, perhaps more boringly, <a href="http://www.ktre.com/story/28964908/study-chocolate-helps-weight-loss">Study: Chocolate helps weight loss</a>.</p>
<p>Some of these reports remain online today exactly as first published, although some outlets, such as <a href="http://www.cosmopolitan.de/abnehm-studie-schokolade-laesst-die-pfunde-purzeln-64990.html">Cosmopolitan Germany</a> and <a href="http://www.huffingtonpost.in/2015/05/29/chocolate-weight-loss_n_6975422.html">Huffington Post India</a>, have since updated their stories to reveal the sting. The <a href="https://www.youtube.com/watch?v=YrC9YcyIuOE">Australian TV news piece has been deleted</a>, as if the mistake never happened.</p>
<h2>What’s the washup?</h2>
<p>The reporters around the world who cut-and-pasted Bohannon’s press release certainly aren’t blameless. None did the due diligence – such as looking at the journal, looking for details about the number of study participants, or even looking for the <a href="http://instituteofdiet.com/">institute</a> Bohannon claimed to work for (which exists only as a website) – that was necessary to find out if the study was legitimate. </p>
<p>But if we’re really looking to find fault here, we’ve got to cast our net a bit wider. As Bohannon and his colleagues noted, there’s a “diet research-media complex” here that’s almost rotten to the core.</p>
<p>From beginning to end we’ve got a system with almost as much scope for corruption as a BBQ at a high-ranking FIFA official’s house:</p>
<ul>
<li><p>we’ve got researchers around the world who have taken to heart the dictum that the <a href="https://theconversation.com/our-obsession-with-metrics-is-corrupting-science-39378">quantity of research outputs</a> is more important than the quality</p></li>
<li><p>we’ve got journal publishers at the high quality end that <a href="http://www.michaeleisen.org/blog/?p=1439">care about media impact more than facts</a></p></li>
<li><p>we’ve got journal publishers at the no-quality end who exploit the desperation of researchers by offering the semblance of publication for a modest sum</p></li>
<li><p>we’ve got media outlets pushing their journalists ever harder to fill our eyeballs with clickbaity and sharebaity content, <a href="http://tktk.gawker.com/my-year-ripping-off-the-web-with-the-daily-mail-online-1689453286">regardless of truth</a></p></li>
<li><p>and we’ve got us: simple creatures prone to click, read and share the things that appeal to our <a href="https://theconversation.com/the-10-stuff-ups-we-all-make-when-interpreting-research-30816">already existing biases</a> and baser selves.</p></li>
</ul>
<h2>Not the heroes we want</h2>
<p>In the stories they tell about themselves, scientists, journalists and popular and scholarly publishers share a common dogma: that they’re heroes in the pursuit of truth. This may be true of them as individuals, but the pressures of their respective industries distort their work in ways that can be utterly cynical.</p>
<p>And so it’s interesting that Bohannon has pulled a similar prank to this before, <a href="http://www.michaeleisen.org/blog/?p=1439">submitting a deeply flawed paper</a> on possible cancer inhibiting molecules to a plethora of different journals, with many accepting the paper with nary a comment. </p>
<p>We should, perhaps, look at work like this in the abstract – as a form of trolling to expose the self serving, the cynical and the corrupt. Trolls like Bohannon may not be the heroes we want, but they’re the heroes this dirty world of ours needs.</p><img src="https://counter.theconversation.com/content/42621/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Will J Grant does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A recent hoax study suggesting chocolate helps people lose weight highlights many problems with the way science is conducted and reported by the media.Will J Grant, Researcher / Lecturer, Australian National Centre for the Public Awareness of Science, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/298042014-08-04T05:21:53Z2014-08-04T05:21:53ZWhen ‘exciting’ trumps ‘honest’, traditional academic journals encourage bad science<figure><img src="https://images.theconversation.com/files/55416/original/b6byzpzs-1406802048.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">One more corner, then I'll answer your questions. </span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/campuspartymexico/4883943564/sizes/l">campuspartymexico</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>Imagine you’re a scientist. You’re interested in testing the hypothesis that playing violent video games makes people more likely to be violent in real life. This is a straightforward theory, but there are still many, many different ways you could test it. First you have to decide which games count as “violent”. Does Super Mario Brothers count because you kill Goombas? Or do you only count “realistic” games like Call of Duty? Next you have to decide how to measure violent behaviour. Real violence is rare and difficult to measure, so you’ll probably need to look at <a href="http://www.ncbi.nlm.nih.gov/pubmed/23097053">lower-level “aggressive” acts</a> – but which ones?</p>
<p>Any scientific study in any domain from astronomy to biology to social science contains countless decisions like this, large and small. On a given project a scientist will probably end up trying many different permutations, generating masses and masses of data.</p>
<p>The problem is that in the final published paper – the only thing you or I ever get to read – you are likely to see only one result: the one the researchers were looking for. This is because, in my experience, scientists often leave complicating information out of published papers, especially if it conflicts with the overall message they are trying to get across.</p>
<p>In a <a href="http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005738#s3">large recent study</a> around a third of scientists (33.7%) admitted to things like dropping data points based on a “gut feeling” or selectively reporting results that “worked” (that showed what their theories predicted). About 70% said they had seen their colleagues doing this. If this is what they are prepared to admit to a stranger researching the issue, the real numbers are probably much, much higher.</p>
<p>It is almost impossible to overstate how big a problem this is for science. It means that, looking at a given paper, you have almost no idea of how much the results genuinely reflect reality (hint: <a href="http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124">probably not much</a>).</p>
<h2>Pressure to be interesting</h2>
<p>At this point, scientists probably sound pretty untrustworthy. But the scientists aren’t really the problem. The problem is the way science research is published. Specifically the pressure all scientists are under to be interesting.</p>
<p>This problem comes about because science, though mostly funded by taxpayers, is published in <a href="https://theconversation.com/the-great-publishing-swindle-the-high-price-of-academic-knowledge-6667">academic journals you have to pay to read</a>. Like newspapers, these journals are run by private, for-profit companies. And, like newspapers, <a href="http://www.theguardian.com/higher-education-network/blog/2013/feb/11/science-research-crisis-retraction-replicability">they want to publish the most interesting, attention-grabbing articles</a>. </p>
<p>This is particularly true of <a href="http://www.theguardian.com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-science">the most prestigious journals</a> like Science and Nature. What this means in practice is that journals don’t like to publish negative or mixed results – studies where you predicted you would find something but actually didn’t, or studies where you found a mix of conflicting results.</p>
<p>Let’s go back to our video game study. You have spent months conducting a rigorous investigation but, alas, the results didn’t quite turn out as your theory predicted. Ideally, this shouldn’t be a problem. If your methods were sound, your results are your results. Publish and be damned, right? But here’s the rub. The top journals won’t be interested in your boring negative results, and being published in these journals has a huge impact on your future career. What do you do?</p>
<p>If your results are unambiguously negative, there is not much you can do. Foreseeing long months of re-submissions to increasingly obscure journals, you consign your study to the <a href="http://www.psychfiledrawer.org/TheFiledrawerProblem.php">file-drawer</a> for a rainy day that will likely never come.</p>
<p>But if your results are less clear-cut? What if some of them suggest your theory was right, but some don’t? Again, you could struggle for months or years, scraping the bottom of the journal barrel to find someone to publish the whole lot. </p>
<p>Or you could “simplify”. After all, most of your results are in line with your predictions, so your theory is probably right. Why not leave those “aberrant” results out of the paper? There is probably a good reason why they turned out like that. Some anomaly. Nothing to do with your theory really.</p>
<p>Nowhere in this process do you feel like you are being deceptive. You just know what type of papers are easiest to publish, so you chip off the “boring” complications to achieve a clearer, more interesting picture. Sadly, the complications are probably closer to messy reality. The picture you publish, while clearer, is much more likely to be wrong.</p>
<p>Science is supposed to have a mechanism for correcting these sorts of errors. It is called replication, and it is one of the cornerstones of the scientific method. Someone else replicates what you did to see if they get the same results. Unfortunately, replication is another thing the science journals consider “boring” – <a href="https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998">so no one is doing it anymore</a>. You can publish your tweaked and nudged and simplified results, safe in the knowledge that no one will ever try exactly the same thing again and find something different.</p>
<p>This has enormous consequences for the state of science as a whole. When we ask “Is drug A effective for disease B?” or “Is policy X a good idea?”, we are looking at a body of evidence that is drastically incomplete. Crucially, it is <a href="http://www.theguardian.com/commentisfree/2014/jan/05/scandal-drugs-trials-withheld-doctors-tamiflu">missing a lot of studies that said “No, it isn’t”</a>, and includes a lot of studies which should have said “Maybe yes, maybe no”, but actually just say “Yes”. </p>
<p>We are making huge, life-altering decisions on the basis of bad information. All because we have created a system which treats scientists like journalists; which tells them to give us what is interesting instead of what is true.</p>
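<p>The distortion this produces can be sketched with a toy simulation (my illustration, with made-up numbers, not data from any real field): run many small studies of a treatment whose true effect is exactly zero, and let “journals” publish only the large, positive estimates.</p>

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0        # the treatment genuinely does nothing
N_STUDIES = 1000         # a file drawer's worth of small studies
N_SUBJECTS = 20          # each study has a small, noisy sample
PUBLICATION_BAR = 0.35   # journals only take big positive estimates

all_estimates, published = [], []
for _ in range(N_STUDIES):
    # each study estimates the effect from its own noisy sample
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    estimate = statistics.mean(sample)
    all_estimates.append(estimate)
    if estimate > PUBLICATION_BAR:   # "interesting" enough to print
        published.append(estimate)

print(f"true effect:                      {TRUE_EFFECT:+.2f}")
print(f"mean estimate, all studies:       {statistics.mean(all_estimates):+.2f}")
print(f"mean estimate, published studies: {statistics.mean(published):+.2f}")
print(f"studies published: {len(published)} of {N_STUDIES}")
```

<p>Averaged over every study that was run, the estimated effect is essentially zero; averaged over only the published ones, it looks solidly positive. Reading the published record alone, you would never know the treatment does nothing.</p>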
<h2>Publish more papers, even boring ones</h2>
<p>This seems like a big, abstract, hard-to-fix problem. But we actually have a solution right in front of us. All we have to do is continue changing the scientific publishing model so it no longer has anything to do with “interest” and is more open to publishing everything, as long as the methodology is sound. <a href="http://en.wikipedia.org/wiki/Open_access">Open Access</a> journals like <a href="http://en.wikipedia.org/wiki/Plos_ONE">PLOS ONE</a> already do this. They publish everything they receive that is methodologically sound, whether it is straightforward or messy, headline-grabbing or mind-numbingly boring.</p>
<p>Extending this model to every academic journal would, at a stroke, remove the single biggest incentive for scientists to hide inconvenient results. The main objection to this is that the resulting morass of published articles <a href="https://theconversation.com/how-science-can-beat-the-flawed-metric-that-rules-it-29606">would be tough to sort through</a>. But this is the internet age. We have become <a href="http://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/">past masters at sorting through masses of crap to get to the good stuff</a> – the internet itself would be unusable if we weren’t.</p><img src="https://counter.theconversation.com/content/29804/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Robert de Vries does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Imagine you’re a scientist. You’re interested in testing the hypothesis that playing violent video games makes people more likely to be violent in real life. This is a straightforward theory, but there…Robert de Vries, Associate fellow in Sociology, University of OxfordLicensed as Creative Commons – attribution, no derivatives.