Unlike with academics and reporters, you can’t check when ChatGPT’s telling the truth<figure><img src="https://images.theconversation.com/files/506273/original/file-20230125-18-6yb4mn.jpg?ixlib=rb-1.1.0&rect=0%2C17%2C5736%2C3790&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Being able to verify how information is produced is important, especially for academics and journalists.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure>
<p>Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have been as intense as those of educators and academics.</p>
<p>Academic publishers have moved <a href="https://www.theguardian.com/science/2023/jan/26/science-journals-ban-listing-of-chatgpt-as-co-author-on-papers">to ban ChatGPT from being listed as a co-author and issue strict guidelines outlining the conditions under which it may be used</a>. Leading universities and schools around the world, from France’s renowned <a href="https://www.reuters.com/technology/top-french-university-bans-use-chatgpt-prevent-plagiarism-2023-01-27/">Sciences Po</a> to <a href="https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays">many Australian universities</a>, have banned its use. </p>
<p>These bans are not merely the actions of academics worried about catching cheaters or students who copy a source without attribution. Rather, the severity of these actions reflects a question that is not getting enough attention in the endless coverage of OpenAI’s ChatGPT chatbot: Why should we trust anything that it outputs?</p>
<p>This is a vitally important question, as ChatGPT and programs like it can easily be used, with or without acknowledgement, in the information sources that comprise the foundation of our society, especially academia and the news media.</p>
<p>Based on my work on the <a href="https://doi.org/10.4324/9781003008309">political</a> <a href="https://doi.org/10.1007/978-3-030-14540-8">economy</a> of <a href="https://utorontopress.com/9781442666221/copyfight/">knowledge governance</a>, I see academic bans on ChatGPT’s use as a proportionate reaction to the threat ChatGPT poses to our entire information ecosystem. Journalists and academics should be wary of using ChatGPT. </p>
<p>Based on its output, ChatGPT might seem like just another information source or tool. However, in reality, ChatGPT — or, rather the means by which ChatGPT produces its output — is <a href="https://www.cigionline.org/articles/chatgpt-strikes-at-the-heart-of-the-scientific-world-view/">a dagger aimed directly at their very credibility as authoritative sources of knowledge</a>. It should not be taken lightly.</p>
<h2>Trust and information</h2>
<p>Think about why we see some information sources or types of knowledge as more trusted than others. Since <a href="https://www.britannica.com/event/Enlightenment-European-history">the European Enlightenment</a>, we’ve tended to equate scientific knowledge with knowledge in general. </p>
<p>Science is more than laboratory research: it’s a way of thinking that prioritizes empirically based evidence and the pursuit of transparent methods regarding evidence collection and evaluation. And it tends to be the gold standard by which all knowledge is judged.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/ai-and-the-future-of-work-5-experts-on-what-chatgpt-dall-e-and-other-ai-tools-mean-for-artists-and-knowledge-workers-196783">AI and the future of work: 5 experts on what ChatGPT, DALL-E and other AI tools mean for artists and knowledge workers</a>
</strong>
</em>
</p>
<hr>
<p>For example, journalists have credibility because they investigate information, cite sources and provide evidence. Even though reporting may sometimes contain errors or omissions, that doesn’t undermine the profession’s authority.</p>
<p>The same goes for opinion editorial writers, especially academics and other experts because they — we — draw our authority from our status as experts in a subject. Expertise involves a command of the sources that are recognized as comprising legitimate knowledge in our fields. </p>
<p>Most op-eds aren’t citation-heavy, but responsible academics will be able to point you to the <a href="https://www.bloomsbury.com/ca/states-and-markets-9781474236935/">thinkers</a> and <a href="https://www.jstor.org/stable/10.5325/jinfopoli.7.2017.0176">the work</a> <a href="https://www.penguinrandomhouse.com/books/12390/the-social-construction-of-reality-by-peter-l-berger/">they’re</a> <a href="https://doi.org/10.24908/ss.v12i2.4776">drawing</a> <a href="https://doi.org/10.1080/1369118X.2012.678878">on</a>. And those sources are themselves built on evidence that a reader can check for themselves.</p>
<h2>Truth and outputs</h2>
<p>Because human writers and ChatGPT seem to be producing the same output — sentences and paragraphs — it’s understandable that some people may mistakenly confer this scientifically sourced authority onto ChatGPT’s output. </p>
<p>That both ChatGPT and reporters produce sentences is where the similarity ends. What’s most important — the source of authority — is not <em>what</em> they produce, but <em>how</em> they produce it.</p>
<p>ChatGPT doesn’t produce sentences in the same way a reporter does. ChatGPT and other machine-learning large language models may seem sophisticated, but they’re basically just complex autocomplete machines. Only instead of suggesting the next word in an email, they produce the most statistically likely words in much longer packages. </p>
<p>These programs repackage others’ work as if it were something new. They do not “understand” what they produce. </p>
<p>The justification for these outputs can never be truth. Their truth is the truth of correlation: the word “sentences” should always complete the phrase “We finish each other’s …” because it is the most common pairing, not because it expresses anything that has been observed.</p>
<p>Because ChatGPT’s truth is only a statistical truth, output produced by this program cannot ever be trusted in the same way that we can trust a reporter’s or an academic’s output. It cannot be verified because it is constructed to create output in a way fundamentally different from what we usually think of as “scientific.” </p>
<p>You can’t check ChatGPT’s sources because the source is the statistical fact that most of the time, a set of words tend to follow each other.</p>
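A minimal sketch can make the “autocomplete” point concrete. This toy bigram model is emphatically not how ChatGPT itself works (ChatGPT uses a large neural network trained on vast amounts of text), but it shows the underlying logic of completing a phrase with the statistically most common next word; the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text that language models learn from.
corpus = (
    "we finish each other's sentences . "
    "we finish each other's sentences . "
    "we finish each other's thoughts ."
).split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(autocomplete("other's"))  # prints 'sentences'
```

The completion is chosen purely because it is the most frequent continuation in the training data, not because the program has observed anything about the world.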
<p>No matter how coherent ChatGPT’s output may seem, simply publishing what it produces is still the equivalent of letting autocomplete run wild. It’s an irresponsible practice because it pretends that these statistical tricks are equivalent to well-sourced and verified knowledge.</p>
<p>Similarly, academics and others who incorporate ChatGPT into their workflow run the existential risk of kicking the entire edifice of scientific knowledge out from underneath themselves. </p>
<p>Because ChatGPT’s output is correlation-based, how does the writer know that it is accurate? Did they verify it against actual sources, or does the output simply conform to their personal prejudices? And if they’re experts in their field, why are they using ChatGPT in the first place?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="a man gives a lecture while reading from two laptop screens" src="https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=304&fit=crop&dpr=1 600w, https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=304&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=304&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=382&fit=crop&dpr=1 754w, https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=382&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/506942/original/file-20230129-36877-sutezm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=382&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Academics have authority on their subject of expertise because there exists a scientific and evidence-based method to verify their work.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Knowledge production and verification</h2>
<p>The point is that ChatGPT’s processes give us no way to verify its truthfulness. In contrast, reporters and academics produce knowledge through a scientific, evidence-based method, and it is this method that validates their work, even if the results go against our preconceived notions.</p>
<p>The problem is especially acute for academics, given our central role in creating knowledge. Relying on ChatGPT to write even part of a column means academics are no longer relying on the scientific authority embedded in verified sources. </p>
<p>Instead, by resorting to statistically generated text, they are effectively making an argument from authority. Such actions also mislead the reader, who cannot distinguish between text written by the author and text generated by an AI.</p>
<p>ChatGPT may produce seemingly legible knowledge, as if by magic. But we would be well advised not to mistake its output for actual, scientific knowledge. One should never confuse coherence with understanding.</p>
<p>ChatGPT promises easy access to new and existing knowledge, but it is a poisoned chalice. Readers, academics and reporters beware.</p>
<p class="fine-print"><em><span>Blayne Haggart receives funding from the Social Sciences and Humanities Research Council of Canada. He is a Senior Fellow with the Centre for International Governance Innovation (CIGI).</span></em></p>ChatGPT is a sophisticated AI program that generates text from vast databases. But it doesn’t understand the information it produces, which also can’t be verified through scientific means.Blayne Haggart, Associate Professor of Political Science, Brock UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1924542022-11-03T16:55:08Z2022-11-03T16:55:08ZHow a quest for mathematical truth and complex models can lead to useless scientific predictions – new research<figure><img src="https://images.theconversation.com/files/492970/original/file-20221102-26-cdyl7i.jpg?ixlib=rb-1.1.0&rect=49%2C0%2C3278%2C2220&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The mathematical concept of a fractal is a never-ending pattern. </span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/grantdaws/13789893314/in/photolist-n1yNRh-n1z7w7-QYKTYY-iU88qq-RFDKVW-qiVTgX-dnKfQE-i4txij-oQ1RaZ-eeY1sk-ehx91S-5NxbxP-anxkVJ-oyM8Wt-mYkFh8-qTEe6g-to6SvE-qNdx6M-pi785j-qNdwjM-qvVLjP-vJxJFP-BkTGbg-pysEuR-oMK9yf-bpwh9N-ef4KcL-dPsse9-pDnWnJ-qiNQEY-sQ7aJv-ADkYoV-dD7gvB-AvEQn3-urorA4-aixXPU-AC9x6L-YNgTpL-say4Bh-VeYB9j-AvMb3T-V6NYPM-ADkX7M-dDcCms-2fbRzRf-omEgin-qBozhn-gZHrGb-dDcDcJ-BqTzfs">G. DAWSON/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>A dominant view in science is that there is a mathematical truth structuring the universe. It is assumed that the scientist’s job is to decipher these mathematical relations: once understood, they can be translated into mathematical models. 
Running the resulting “silicon reality” in a computer may then provide us with useful insights into how the world works.</p>
<p>Since science keeps on revealing secrets, models keep getting bigger. They integrate discoveries and newly found mechanisms to better reflect the world around us. Many scholars assume that more detailed models <a href="https://www.nature.com/articles/515338a">produce sharper estimates</a> and better predictions because they are closer to reality. But our new research, <a href="https://www.science.org/doi/10.1126/sciadv.abn9450">published in Science Advances</a>, suggests they may have the opposite effect.</p>
<p>The assumption that “more detail is better” cuts across disciplinary fields. The ramifications are enormous. Universities get more and more powerful computers because they want to run bigger and bigger models, requiring an increasing amount of computing power. Recently, the European Commission invested €8 billion (£6.9 billion) to create a very detailed simulation of the Earth (with humans), <a href="https://www.science.org/content/article/europe-building-digital-twin-earth-revolutionize-climate-forecasts">dubbed a “digital twin”</a>, hoping to better address current social and ecological challenges.</p>
<p>In our latest research, we show that the pursuit of ever more complex models as tools to produce more accurate estimates and predictions may not work. Based on statistical theory and mathematical experiments, we ran hundreds of thousands of models with different configurations and measured how uncertain their estimations are. </p>
<p>We discovered that more complex models tended to produce more uncertain estimates. This is because new parameters and mechanisms are added. A new parameter, say the effect of chewing gum on the spread of a disease, needs to be measured – and is therefore subject to measurement errors and uncertainty. Modellers may also use different equations to describe the same phenomenon mathematically. </p>
<p>Once these new additions and their associated uncertainties are integrated into the model, they pile on top of the uncertainties already there. And uncertainties keep on expanding with every model upgrade, making the model output fuzzier at every step of the way – even if the model itself becomes more faithful to reality.</p>
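The compounding described above is easy to see in a minimal Monte Carlo sketch. This is an illustration of the general statistical point, not the authors’ actual experiments: each parameter below is a made-up factor measured as 1.0 with a 10% error, and the spread of the model output widens as factors are multiplied in.

```python
import random
import statistics

random.seed(42)

def model_output(n_params, n_runs=20_000):
    """Multiply together n_params uncertain factors, each measured as
    1.0 with a 10% measurement error, and return the spread (standard
    deviation) of the resulting estimates."""
    outputs = []
    for _ in range(n_runs):
        y = 1.0
        for _ in range(n_params):
            y *= random.gauss(1.0, 0.1)  # one uncertain parameter
        outputs.append(y)
    return statistics.stdev(outputs)

for k in (1, 2, 4, 8):
    print(f"{k} uncertain parameters -> output spread {model_output(k):.3f}")
```

With these assumed error sizes the spread grows roughly as the square root of the number of parameters (from about 0.1 with one parameter to about 0.29 with eight), even though every added factor could be said to make the toy model “more realistic.”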
<figure class="align-center ">
<img alt="Server room in the Supercomputing Center of Barcelona." src="https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=344&fit=crop&dpr=1 600w, https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=344&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=344&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=432&fit=crop&dpr=1 754w, https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=432&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/492971/original/file-20221102-28436-yrh578.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=432&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Supercomputer.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/barcelona-spain-january-16-2018-view-1268319157">BearFotos/Shutterstock</a></span>
</figcaption>
</figure>
<p>This affects all models that do not have appropriate validation or training data against which to check the accuracy of their output. This includes global models of climate change, hydrology (water flow), food production and epidemiology alike, as well as all models predicting future impacts. </p>
<h2>Fuzzy results</h2>
<p>In 2009, engineers created an algorithm called Google Flu Trends for predicting the proportion of flu-related doctor visits across the US. Despite being based on 50 million queries that people had typed into Google, the model wasn’t able to predict the 2009 swine flu outbreak. The engineers then made the model, which is no longer operating, even more complex. But it still wasn’t all that accurate. <a href="https://www.sciencedirect.com/science/article/abs/pii/S0169207020301928">Research led by German psychologist Gerd Gigerenzer</a> showed it consistently overestimated doctor visits in 2011–13, in some cases by more than 50%. </p>
<p>Gigerenzer discovered that a much simpler model could produce better results. His model predicted weekly flu rates based only on one teeny piece of data: how many people had seen their GP the previous week. </p>
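The persistence idea behind that simpler model can be sketched in a few lines. The weekly rates below are made-up numbers for illustration, not Gigerenzer’s data: the forecast for each week is simply the value observed the week before.

```python
# Illustrative weekly GP-visit rates (made-up numbers).
rates = [1.2, 1.4, 1.9, 2.6, 2.4, 1.8, 1.5]

# Persistence forecast: next week's rate is this week's observed rate.
forecasts = rates[:-1]
actuals = rates[1:]

errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
print(f"mean absolute error: {sum(errors) / len(errors):.2f}")
```

A one-parameter rule like this has nothing to mis-measure, which is part of why it can outperform a far more elaborate model whose many inputs each carry their own errors.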
<p>Another example is global hydrological models, which track how and where water moves and is stored. They started simple in the 1960s based on “evapotranspiration processes” (the amount of water that could evaporate and transpire from a landscape covered in plants) and soon got extended, taking into account domestic, industrial and agricultural water uses at the global scale. The next step for these models is to simulate water demands on Earth for every kilometre each hour.</p>
<p>And yet one wonders whether this extra detail will just make them even more convoluted. We <a href="https://www.nature.com/articles/s41467-021-24508-8">have shown</a> that the estimates of irrigation water use produced by eight global hydrological models can be reproduced with a single parameter: the extent of the irrigated area.</p>
<h2>Ways forward</h2>
<p>Why has the fact that more detail can make a model worse been overlooked until now? Many modellers do not submit their models to uncertainty and sensitivity analysis, methods that tell researchers how uncertainties in the model affect the final estimation. Many keep on adding detail without working out which elements in their model are most responsible for the uncertainty in the output. </p>
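For readers unfamiliar with the idea, a crude variance-based sensitivity check can be sketched as follows. This is an illustrative toy, not the formal uncertainty and sensitivity analyses used in the research: freeze one uncertain input at a time and see how much of the output’s variance disappears.

```python
import random
import statistics

random.seed(0)

# A toy model with three uncertain inputs of very different importance.
def model(a, b, c):
    return 10 * a + 2 * b + 0.1 * c

def output_variance(frozen=None, n=20_000):
    """Variance of the model output when all inputs vary, except that
    the input named in `frozen` is held at its mean value."""
    outs = []
    for _ in range(n):
        x = {k: random.gauss(0, 1) for k in "abc"}
        if frozen:
            x[frozen] = 0.0  # hold this input at its mean
        outs.append(model(**x))
    return statistics.variance(outs)

base = output_variance()
for name in "abc":
    reduction = 1 - output_variance(frozen=name) / base
    print(f"freezing {name} removes ~{reduction:.0%} of output variance")
```

In this toy, freezing `a` removes almost all of the variance, flagging it as the input worth measuring carefully, while inputs like `c` add parameters to the model without adding much information about the output.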
<p>This is concerning because modellers have an interest in developing ever larger models: entire careers are built on complex models, partly because they are harder to falsify. Their complexity intimidates outsiders and makes it harder to understand what is going on inside the model.</p>
<p>There are remedies, however. We suggest ensuring that models don’t keep getting larger and larger for the sake of it. Even if scientists do perform an uncertainty and sensitivity analysis, their estimates risk getting so uncertain that they become useless for science and policymaking. Investing a lot of money in computing just to run models whose estimate is completely fuzzy makes little sense. </p>
<p>Modellers should instead ponder how uncertainty expands with every addition of detail into the model – and find the best trade-off between the level of model detail and uncertainty in the estimation. </p>
<p>To find this trade-off, one can use the concept of “effective dimensions” – a measure of the number of parameters which add uncertainty to the final output, taking into account how these parameters interact with each other – which we define in our paper. </p>
<p>By calculating a model’s effective dimensions after each upgrade, modellers can appraise whether the increase in uncertainty still makes the model suitable for policy – or, in contrast, if it makes the model’s output so uncertain as to be useless. This increases transparency and helps scientists design <a href="https://www.nature.com/articles/d41586-020-01812-9">models that better serve science and society</a>. </p>
<p>Some modellers may still argue that the addition of <a href="https://www.nature.com/articles/s41558-022-01384-8">model detail can lead to more accurate estimates</a>. The burden of proof now lies with them.</p>
<p class="fine-print"><em><span>Arnald Puy receives funding from the European Commission (Marie-Sklodowska Curie Global Fellowship, grant number 792178).</span></em></p>The assumption that more detail is better is questioned by a new study.Arnald Puy, Associate Professor in Social and Environmental Uncertainties, University of BirminghamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1821422022-06-20T17:41:45Z2022-06-20T17:41:45ZPeer review: Can this critical step in the publication of science research be kinder?<figure><img src="https://images.theconversation.com/files/469531/original/file-20220617-14-piidla.jpg?ixlib=rb-1.1.0&rect=10%2C107%2C6201%2C4691&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It can be painful for researchers to read harshly worded criticism of their work from peer reviewers.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Democracy has been called the least worst system of government. Peer review is the least worst system for assessing the merit of scientific work. </p>
<p>Peer review is the written evaluation of a paper by other experts in the field. Though this sounds like assessment by equals, the power imbalance created by the roles of reviewer and reviewed distorts the relationship and affects the tone of the review. Reviews can be patronizing, demanding and unkind. </p>
<p>It is painful to read harshly worded criticism of work that has taken a team hundreds or thousands of hours and been submitted hopefully and in good faith. From our experience, we know that reviews can be accurate, robust and make every scientific point while using language and tone that is helpful and supportive.</p>
<h2>Supportive review</h2>
<p>We are a team of editors of an open-access Canadian kidney journal, the <a href="https://journals.sagepub.com/home/cjk"><em>Canadian Journal of Kidney Health and Disease</em></a>. When we founded our journal in 2014, supportive review was the <a href="https://doi.org/10.1186%2F2054-3581-1-1">first of our guiding principles</a>. Since then, we have written supportively as editors, selected reviewers who write supportively and participated in training <a href="https://kidney.ca/Krescent/Home">the next generation of Canadian kidney scientists</a> to conduct reviews that are complete, rigorous and kind. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1536639120843362304"}"></div></p>
<p>Supported by a larger group of like-minded people from multiple disciplines, we recently published <a href="https://doi.org/10.1177%2F20543581221080327">an editorial</a> outlining these principles. A dozen other kidney journals expressed their support for the idea, with <a href="https://dx.doi.org/10.1038/s41581-022-00569-w"><em>Nature Reviews Nephrology</em></a>, <a href="https://doi.org/10.1093/ndt/gfac183"><em>NDT</em></a> and <a href="https://doi.org/10.1007/s00467-022-05535-z"><em>Pediatric Nephrology</em></a> publishing co-ordinated editorials recommitting to principles of constructive criticism.</p>
<h2>The long process of research</h2>
<p>Scientific papers condense a large amount of work into a structured format, usually no longer than four to eight times the length of this article. The work of a paper starts with an idea that may be developed by the team for a year or more before it crystallizes into an application for funding, which may go through rounds of revisions. </p>
<p>Once funded, people and budgets are assigned to the project and the work proceeds. The work can involve the time of multiple team members for months and even years.</p>
<p>When the work is complete, they write a paper, detailing what they did, how and why, what they found and what they think it means. This paper itself is often the product of hundreds of hours of work, with multiple authors contributing their specific expertise and working on the messaging of the whole.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1171575490760630276"}"></div></p>
<p>The journal receives the manuscript and assigns an editor, who assigns peer reviewers. Peer reviewers are other scientists working on similar topics. They must be totally unconnected with the people writing the paper. With notable exceptions, most journals employ single-masked peer review: the reviewer sees the authorship of the paper but the authors of the paper will not see who wrote the review.</p>
<p>Peer reviewers are not paid or rewarded for their review of the manuscript — they take it on as part of the work of academic life. Essentially, it is an unrewarded activity performed by people who are themselves authors. It varies by discipline, but in biomedicine, they may spend three to six hours on a review.</p>
<h2>Harsh reviews</h2>
<p>How does this altruistic activity, undertaken by a reviewer who is very familiar with the author role, lead to such pain and frustration for other authors? </p>
<p>We think that scientists sometimes confuse harshness with intellectual rigour and that a reviewer’s experience of harshness in reviews of their own work, amplified by the power imbalance between reviewer and reviewed, leads to perpetuation of harsh and unhelpful review. Other reviewers and editors avoid these pitfalls entirely.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1506915328705630214"}"></div></p>
<p>“It looks to me like one of your first attempts at scientific publishing, and I can understand that you are also writing in a non-native language,” <a href="https://twitter.com/IngridAnell/status/1506915328705630214?s=20&t=2SF4MYOmeNiFo2XW6kNREg">wrote one anonymous reviewer</a> to a mid-career woman scientist with 13 first-author peer-reviewed publications. “I just want to give up today,” she wrote. </p>
<p>But she won’t. Scientists are prepared to receive this kind of feedback and be hurt over and over in the name of science. As editors, we believe there is a better way — that feedback should be rigorous, but will be more readily incorporated if kindly given, to the advancement of science.</p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1498156968778866693"}"></div></p>
<p>These are not new ideas. In 2006, Prof. Mohan Dutta suggested <a href="https://doi.org/10.1207/s15327027hc2002_11">10 commandments for reviewers</a>, all of which focus on the collaborative nature of the relationship between reviewer and reviewed. Advice for reviewers often includes a recommendation to write constructively, though sometimes this is phrased as something like “write constructively, and then turn to criticism,” as if those are mutually exclusive. </p>
<p>We can take this principle further and — thanks to our community of reviewers in kidney medicine — we and other kidney journals make a commitment to kindness in review. Dutta’s 10th commandment is “do unto others as you would have them do unto you.” Every branch of science would be improved by implementing this idea.</p>
<p class="fine-print"><em><span>Catherine Clase has received consultation, advisory board membership or research funding from the Ontario Ministry of Health, Sanofi, Pfizer, Leo Pharma, Astellas, Janssen, Amgen, Boehringer-Ingelheim and Baxter. In 2018 she co-chaired a KDIGO potassium controversies conference sponsored at arm's length by Fresenius Medical Care, AstraZeneca, Vifor Fresenius Medical Care, Relypsa, Bayer HealthCare and Boehringer Ingelheim. Catherine is a member of the Cloth Mask Knowledge Exchange, a research and knowledge translation group that includes industry stakeholders. Industry stakeholders contribute to the Cloth Mask Knowledge Exchange by contributing to grant funding, and through in-kind contributions of time and expertise. Industry stakeholders make masks and distribute polypropylene and other fabrics. She is a member of McMaster's Centre of Excellence in Protective Equipment and Materials, and editor-in-chief of clothmasks.org. Catherine Clase receives funding from CIHR, and is a member of the Green Party, the American Society of Nephrology, the Canadian Society of Nephrology, the American Association of Textile Chemists and Colorists and ASTM International.</span></em></p><p class="fine-print"><em><span>Josee Bouchard receives funding from CIHR, Kidney Foundation of Canada and CDTRP. She is affiliated with the Hopital Sacré-Coeur de Montréal, Université de Montréal. She is a member of the Canadian Society of Nephrology and American Society of Nephrology.</span></em></p><p class="fine-print"><em><span>Manish M Sood receives funding from CIHR, the Kidney Foundation of Canada, the Canadian Medical Association and the Heart and stroke foundation. He is supported by the Jindal Research Chair. He has received speaker fees from Astrazeneca. 
He is affiliated with the Ottawa Hospital Research Institute, uOttawa and the Ottawa Hospital.</span></em></p><p class="fine-print"><em><span>Rachel Holden receives research funding from CIHR, the South Eastern Ontario Medical Organization, and the Translational Institute of Medicine at Queen's University, Kingston, Ontario. She has received investigator initiated research funding from OPKO Renal. She has received consultation or advisory board funding from Sanofi, Bayer and Otsuka. She is a member of the Canadian Society of Nephrology.</span></em></p><p class="fine-print"><em><span>Sunny Hartwig does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
Peer review of research sounds like it should be a conversation between equals. Instead, it can be patronizing, demanding and simply unkind. A group of journal editors thinks this should change.
Catherine Clase, Professor of Medicine, Epidemiologist, Physician, McMaster University
Josee Bouchard, Nephrologist, Professor of Medicine, Université de Montréal
Manish M Sood, Physician, Professor of Medicine, L’Université d’Ottawa/University of Ottawa
Rachel Holden, Professor of Medicine, Queen's University, Ontario
Sunny Hartwig, Associate Professor, University of Prince Edward Island
Licensed as Creative Commons – attribution, no derivatives.
Using ‘science’ to market cookies and other products meant for pleasure backfires with consumers<figure><img src="https://images.theconversation.com/files/462319/original/file-20220510-20-renb80.jpg?ixlib=rb-1.1.0&rect=77%2C25%2C4204%2C2818&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Science makes pleasure. 
</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/full-frame-shot-of-chocolate-chip-cookie-royalty-free-image/977280226">Billy Burdette/EyeEm via Getty Images</a></span></figcaption></figure><p><em>The <a href="https://theconversation.com/us/topics/research-brief-83231">Research Brief</a> is a short take about interesting academic work.</em> </p>
<h2>The big idea</h2>
<p>When companies say a product meant for pleasure was developed using science, consumers are less likely to buy it. That’s what we found in our <a href="https://doi.org/10.1093/jcr/ucac020">peer-reviewed research</a>. </p>
<p>Marketers often describe how a product has been scientifically developed <a href="https://www.ispot.tv/ad/w7k0/turtle-wax-on-a-molecular-level">in their promotions</a>, on <a href="https://store.skinbetter.com/shop/best-sellers">product packaging</a> and on <a href="https://www.sweetscienceicecream.com/">websites</a>. Across 10 studies, we examined when consumers like products created with science and when they do not. We found that it depends on what the marketer is trying to sell: pleasure or practicality. </p>
<p>In our first study, we recruited 511 students to select a chocolate chip cookie among three options on a menu: “Luscious chocolatey taste,” “Our most scrumptious cookie” and “Loads of ooey-gooey chocolate chunks.” Half the students, however, were told the first option was “scientifically developed to have a luscious chocolatey taste.”</p>
<p>Participants informed about the involvement of science were 31% less likely to pick the first option relative to the other two – even though all three cookies were identical and were actually given to the students. This suggested that people don’t like to associate science with delectable treats. </p>
<p>Two similar studies involving cookies confirmed this finding and even demonstrated that merely the mention of “science” can be a turnoff. </p>
<p>In a separate study involving smoothies, we aimed to rule out different influences on these findings. We asked 402 people on the online crowdsourcing website MTurk to rate their interest in purchasing a smoothie after reading a marketing slogan. They randomly read one of two phrases: “Our rigorous scientific development process ensures that JTB smoothies taste delicious, indulgent, and creamy” or “We ensure that JTB smoothies taste delicious, indulgent, and creamy.”</p>
<p>We found that those told about the science were 14% less likely to want to purchase the smoothie than the others.</p>
<p>We aimed to see if these findings would also apply to non-consumable products, which can be seen both as pleasurable and practical. </p>
<p>In one study, we asked 1,013 Americans on MTurk to review a promotional message about a brand of body wash and rate their purchase intentions. Half read that the body wash will “immerse your senses in an indulgent experience” – an appeal to pleasure – while the others learned it will “wash away odor causing bacteria” – the practical appeal. Furthermore, within each group, half the participants were told the product was scientifically developed, while the other half were given no information about its development. </p>
<p>We found that participants who received the pleasure appeal were much less likely to want to buy the product when told that science was involved. But those who were informed of its more utilitarian qualities were more likely to buy when the science was mentioned. </p>
<p>Subsequent studies showed that companies can reduce this “backfire effect” by emphasizing that science is necessary to produce the product – such as by explaining that chemistry is necessary for baking. We also found that people who have a high degree of <a href="https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/">trust in scientists</a> or work in related fields do not respond negatively to the mention of science. </p>
<h2>Why it matters</h2>
<p>Science, and the scientific method, is vital to producing just about everything you see around you, from <a href="https://www.acs.org/content/acs/en/education/resources/highschool/chemmatters/past-issues/archive-2014-2015/smartphones.html">computers and smartphones</a> to <a href="https://www.acs.org/content/acs/en/pressroom/reactions/videos/2016/how-does-shampoo-work.html">shampoo</a> and – you guessed it – <a href="https://www.npr.org/sections/thesalt/2013/12/03/248347009/cookie-baking-chemistry-how-to-engineer-your-perfect-sweet-treat">chocolate chip cookies</a>. </p>
<p>But our studies suggest that many consumers have mixed feelings about the promotion of science in product development.</p>
<p>We believe this occurs because people stereotype the scientific process as being competent but cold, similar to how they <a href="https://www.americanscientist.org/blog/macroscope/scientists-who-selfie-break-down-stereotypes">stereotype scientists</a>. As consumers, people tend to <a href="http://doi.org/10.1057/pb.2013.5">associate enjoyment and sensory pleasure with warmth</a>. Put another way, it was the feeling that a product focused on pleasure is an odd fit with science that reduced people’s desire to buy it.</p>
<h2>What still isn’t known</h2>
<p>Future research can explore other ways to turn off the science backfire effect for pleasure-focused products. We don’t yet know if there are ways to change people’s beliefs about science so that it doesn’t seem like such a mismatch with pleasure.</p><img src="https://counter.theconversation.com/content/181904/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>New research found that consumers were less likely to buy a product associated with pleasure if marketers emphasized it was developed with science.Rebecca Walker Reczek, Berry Chair of New Technologies in Marketing and Professor of Marketing, The Ohio State UniversityAviva Philipp-Muller, Doctoral candidate in Social Psychology, The Ohio State UniversityJohn Costello, Assistant Professor of Marketing, University of Notre DameLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1772262022-02-23T02:56:17Z2022-02-23T02:56:17ZCurious Kids: what is the most important thing a scientist needs?<blockquote>
<p><strong>What is the most important thing a scientist needs? – Casey, age 6, Perth</strong></p>
</blockquote>
<p><a href="https://theconversation.com/id/topics/curious-kids-83797"><img src="https://images.theconversation.com/files/386375/original/file-20210225-21-1xfs1le.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=90&fit=crop&dpr=2" width="100%"></a></p>
<p>Hi Casey! Thanks for this great question. Unfortunately, there’s not really one simple answer. So I’m going to talk about three important things. </p>
<p>Scientists need to be good at asking questions. They need to be good at investigating the world to find answers to their questions. And they need to keep in mind that no matter how much they know, there’s always more to learn.</p>
<h2>Asking questions</h2>
<p>Most scientists are inspired by wanting to understand how things in the world work. That means they start by asking questions. </p>
<p>The questions might be driven by curiosity about something amazing in nature, like “Why do stars look like they’re twinkling?” or “Why do these birds have such fancy feathers?” Or they might be driven by wanting to help communities (or even the whole world) with a problem, like “How can we keep this river healthy?” or “What can we do about climate change?” </p>
<p>But all good scientific questions have something in common: they will point scientists towards some sort of <em>investigation</em> they can do to try and find out an answer.</p>
<p>Scientists investigate in many different ways. Some examples are observing how animals behave in the wild, measuring how plants grow over time, doing an experiment in a lab, or using a computer to create a virtual version (called a simulation) of a black hole.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-can-black-holes-become-white-holes-176034">Curious Kids: can black holes become white holes?</a>
</strong>
</em>
</p>
<hr>
<h2>Finding answers</h2>
<p>Different scientific questions call for different sorts of answers. Here are some examples (asked by other curious kids!).</p>
<p><a href="https://theconversation.com/why-do-onions-make-you-cry-129519">Why do onions make you cry?</a> <a href="https://theconversation.com/curious-kids-how-are-ants-and-other-creatures-able-to-walk-on-the-ceiling-173712">How do ants walk on the ceiling?</a> These questions call for <em>explanations</em>: telling us why or how something works the way it does. </p>
<p><a href="https://theconversation.com/curious-kids-could-octopuses-evolve-until-they-take-over-the-world-and-travel-to-space-156493">Could octopuses evolve until they take over the world and travel to space?</a> This question calls for some explanation about octopuses and also a <em>prediction</em> about what might (or might not) happen in the future. </p>
<p><a href="https://theconversation.com/how-many-stars-are-there-in-space-165370">How many stars are there in space?</a> This question calls for a <em>number</em> (but it helps if the answer explains a bit, too). </p>
<figure class="align-center ">
<img alt="Artist's impression of black hole surrounded by stars" src="https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/447978/original/file-20220223-19-1iaz555.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Science doesn’t have to involve experiments. It could also mean making a computer simulation of a black hole.</span>
<span class="attribution"><span class="source">NASA</span></span>
</figcaption>
</figure>
<p>How do scientists investigate the world to find answers? It often takes a lot of training and some creativity. There is a thing called the <em>scientific method</em>, which you can think of as a sort of recipe for doing science. It goes like this:</p>
<ol>
<li><p>Ask a <em>question</em></p></li>
<li><p>Come up with a guess (called a <em>hypothesis</em>) about an answer to your question</p></li>
<li><p>Do an <em>experiment</em> to test your hypothesis</p></li>
<li><p><em>Report</em> what you learned, so others can learn from it too.</p></li>
</ol>
<p>This is a good way to do science, and many scientists always follow these steps. But many others don’t. Some scientists do experiments. Some do observations instead, or create models and simulations of the things they want to learn about. </p>
<p>Also, not all scientific projects start with a hypothesis and then test it. Some start with big open-ended questions and investigate them by exploring. There is really no such thing as <em>the</em> scientific method. There is a whole family of scientific methods. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-many-stars-are-there-in-space-165370">How many stars are there in space?</a>
</strong>
</em>
</p>
<hr>
<h2>There is always more to learn</h2>
<p>Becoming a scientist takes a lot of learning. But it is important for scientists to keep in mind they don’t know everything. A fancy name for this is <em>intellectual humility</em>. “Intellectual” has to do with how clever we are, and “humility” has to do with recognising our own limits. </p>
<p>So, “intellectual humility” means being aware that you’ll sometimes get things wrong. It also means listening to other people’s ideas rather than just thinking you’re right all the time.</p>
<p>The relationship between science and truth is complicated. Scientists work hard to learn true things about the world. But the things we think are true change over time. A few hundred years ago, people thought that when we get sick it’s because of some sort of poison in the air. Then we learned about bacteria and viruses, and figured out they can make us sick. But we still haven’t figured out everything about how that works.</p>
<p>It’s great to be curious – there’s always more to learn!</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/curious-kids-could-octopuses-evolve-until-they-take-over-the-world-and-travel-to-space-156493">Curious Kids: could octopuses evolve until they take over the world and travel to space?</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/177226/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Emily Parke receives Marsden funding from The Royal Society of New Zealand Te Apārangi.</span></em></p>Scientists need to be good at asking questions, investigating the world to find answers, and keeping in mind that no matter how much they know, there’s always more to learn.Emily Parke, Senior Lecturer in Philosophy, University of Auckland, Waipapa Taumata RauLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1703702021-10-28T19:15:32Z2021-10-28T19:15:32ZThe ‘97% climate consensus’ is over. Now it’s well above 99% (and the evidence is even stronger than that)<figure><img src="https://images.theconversation.com/files/428997/original/file-20211028-17-1z114vn.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C6233%2C3648&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Martin Meissner/AP</span></span></figcaption></figure><p>Despite the <a href="https://theconversation.com/99-999-certainty-humans-are-driving-global-warming-new-study-29911">overwhelming evidence</a>, it’s still common to see <a href="https://theconversation.com/australias-stumbling-last-minute-dash-for-climate-respectability-doesnt-negate-a-decade-of-abject-failure-169891">politicians</a>, media commentators or social media users cast doubt on the role of humans in driving climate change.</p>
<p>But this denialism is now almost nonexistent among climate scientists, as a <a href="https://iopscience.iop.org/article/10.1088/1748-9326/ac2966">study released this month</a> confirms. US researchers examined the peer-reviewed literature and found more than 99% of climate scientists now endorse the evidence for human-induced climate change. </p>
<p>That’s even higher than the 97% reported by an influential <a href="https://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024/meta">2013 study</a>, which has become a widely cited statistic by both climate change deniers and those who accept the evidence. </p>
<p>Why has the needle evidently shifted even more firmly in favour of the evidence-based consensus? Or, to put it another way, what happened to the 3% of researchers who rejected the consensus on human-caused climate change? Is this change purely because of the growing weight of evidence published over the past few years?</p>
<h2>Unpicking the polls</h2>
<p>We must first ask whether the two studies are directly comparable. The answer is yes. The <a href="https://iopscience.iop.org/article/10.1088/1748-9326/ac2966">latest study</a> has reexamined the literature published since 2012, and is based on the same methods as the <a href="https://theconversation.com/consensus-confirmed-over-90-of-climate-scientists-believe-were-causing-global-warming-57654">2013 study</a>, albeit with some important refinements.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/consensus-confirmed-over-90-of-climate-scientists-believe-were-causing-global-warming-57654">Consensus confirmed: over 90% of climate scientists believe we're causing global warming</a>
</strong>
</em>
</p>
<hr>
<p>Both studies searched the <a href="https://clarivate.com/webofsciencegroup/solutions/web-of-science/">Web of Science</a> database – an independent worldwide repository of scientific paper citations – using the keywords “global climate change” and “global warming”. However, the recent study added “climate change” to the other two keyword searches, because the <a href="https://iopscience.iop.org/article/10.1088/1748-9326/ac2966">authors found</a> that most climate-contrarian papers would not have been returned with only the two original terms.</p>
<p>The <a href="https://iopscience.iop.org/article/10.1088/1748-9326/8/2/024024/meta">2013 study</a> examined 11,944 climate research papers and found almost one-third of them expressed a position on the cause of global warming. Of these 4,014 papers, 97% endorsed the consensus position that humans are the cause, 1% were uncertain, and 2% explicitly rejected it.</p>
<p>A <a href="https://link.springer.com/article/10.1007/s00704-015-1597-5">2015 review</a> examined 38 climate-contrarian papers published over the preceding decade, and identified a range of methodological flaws and sources of bias.</p>
<p>One of the <a href="https://qz.com/1069298/the-3-of-scientific-papers-that-deny-climate-change-are-all-flawed/">reviewers commented</a> that “every single one of those analyses had an error – in their assumptions, methodology, or analysis – that, when corrected, brought their results into line with the scientific consensus”.</p>
<p>For example, many of the contrarian papers had “<a href="https://dictionary.cambridge.org/dictionary/english/cherry-pick">cherrypicked</a>” results that supported their conclusion, while ignoring important context and other data sources that contradicted it. Some of them simply ignored fundamental physics.</p>
<p>The 2015 reviewers also made the important point that “science is never settled and that both mainstream and contrarian papers must be subjected to sustained scrutiny”. This is the cornerstone of the <a href="https://www.britannica.com/science/scientific-method">scientific method</a>, and few if any climate scientists would disagree with this statement. </p>
<h2>Separating the human influence from the natural</h2>
<p>The recently published Intergovernmental Panel on Climate Change (IPCC) <a href="https://www.ipcc.ch/assessment-report/ar6/">Sixth Assessment Report</a> says “it is unequivocal that human influence has warmed the atmosphere, ocean and land”, and warns that the Paris Agreement goals of 1.5°C and 2°C above pre-industrial levels will be <a href="https://theconversation.com/if-all-2030-climate-targets-are-met-the-planet-will-heat-by-2-7-this-century-thats-not-ok-170458">exceeded during this century</a> without dramatic emissions reductions.</p>
<p>In reaching this conclusion, it is important to distinguish between changes caused by <a href="https://www.science.org.au/curious/earth-environment/enhanced-greenhouse-effect">human activities</a> altering the atmosphere’s chemistry, and <a href="https://www.pacificclimatefutures.net/en/help/climate-projections/understanding-climate-variability-and-change/">climate variability</a> caused by natural factors. </p>
<p>These natural variations include small changes in the <a href="https://climate.nasa.gov/blog/2910/what-is-the-suns-role-in-climate-change/">Sun’s energy output</a> due to sunspots and solar flares, infrequent <a href="https://www.usgs.gov/natural-hazards/volcano-hazards/volcanoes-can-affect-climate">volcanic eruptions</a>, and the effects of <a href="https://www.carbonbrief.org/interactive-much-el-nino-affect-global-temperature">El Niño</a> weather patterns in the Pacific Ocean. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Graphs of global temperatures" src="https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=314&fit=crop&dpr=1 600w, https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=314&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=314&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=394&fit=crop&dpr=1 754w, https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=394&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/428996/original/file-20211028-17-1y4ntru.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=394&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">History of global temperature change and causes of recent warming.</span>
<span class="attribution"><span class="source">IPCC</span></span>
</figcaption>
</figure>
<p>Excluding these natural variations, Earth’s surface temperature was generally stable from about 2,000 to 1,000 years ago. After that, the planet cooled by about 0.3°C over <a href="https://www.ipcc.ch/assessment-report/ar6/">several centuries</a>, before the advent of fossil fuel-based industrialisation in the 1800s.</p>
<p>One <a href="https://www.researchgate.net/publication/344709426_Prominent_role_of_volcanism_in_Common_Era_climate_variability_and_human_history">study</a> identified 12 major volcanic eruptions from 100 to 1200 CE, compared with 17 eruptions from 1200 to 1900 CE. Hence, heightened volcanic activity over roughly the past 800 years was associated with a general global cooling before the industrial revolution. </p>
<p>Current rates of global warming are <a href="https://www.ipcc.ch/assessment-report/ar6/">unprecedented</a> in more than 2,000 years and temperatures now exceed the warmest (multi-century) period in <a href="https://www.ipcc.ch/assessment-report/ar6/">more than 100,000 years</a>. Global average <a href="https://www.ipcc.ch/assessment-report/ar6/">surface temperature</a> for the decade from 2011-20 was about 1.1°C higher than in 1850-1900. Each of the past four decades has been warmer than any preceding decade since 1850, when reliable weather observations began.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/99-999-certainty-humans-are-driving-global-warming-new-study-29911">99.999% certainty humans are driving global warming: new study</a>
</strong>
</em>
</p>
<hr>
<p>Researchers can separate human and natural factors in the modern global temperature record. This involves a process called <a href="https://www.climate.gov/maps-data/climate-data-primer/predicting-climate/climate-models">hindcasting</a>, in which a climate model simulates past climate under different combinations of human and natural factors, and the results are compared with the observed data to see which combination most accurately recreates the real world. </p>
<p>If human factors are removed from the data set and only volcanic and solar factors are included, then global average surface temperatures since 1950 should have remained similar to those over the preceding 100 years. But of course they haven’t.</p>
<p>The evidence, and the scientific consensus on it, are both clearer than ever.</p><img src="https://counter.theconversation.com/content/170370/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Steve Turton has previously received funding from the Australian Government. Steve is the independent chair of the Wet Tropics Healthy Waterways Partnership, an initiative of the Reef 2050 Long Term Sustainability Plan.</span></em></p>One of the most famous stats in the climate debate is the 97% of scientists who endorse the consensus on human-induced global heating. Ahead of the Glasgow summit, that figure has climbed even higher.Steve Turton, Adjunct Professor of Environmental Geography, CQUniversity AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1676812021-09-15T23:01:00Z2021-09-15T23:01:00ZA researcher’s view on COVID-19 vaccine hesitancy: The scientific process needs to be better explained<figure><img src="https://images.theconversation.com/files/420655/original/file-20210912-27-1x5nmgm.jpg?ixlib=rb-1.1.0&rect=0%2C45%2C3798%2C2644&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In the reluctance to vaccinate, there is a lack of trust and understanding of the scientific process. Better communication would help rebuild bridges. </span> <span class="attribution"><span class="source">The Canadian Press/Paul Chiasson</span></span></figcaption></figure><iframe style="width: 100%; height: 175px; border: none; position: relative; z-index: 1;" allowtransparency="" src="https://narrations.ad-auris.com/widget/the-conversation-canada/a-researcher’s-view-on-covid-19-vaccine-hesitancy--the-scientific-process-needs-to-be-better-explained" width="100%" height="400"></iframe>
<p><a href="https://theconversation.com/what-scientists-are-doing-to-develop-a-vaccine-for-the-new-coronavirus-131255">When I first wrote about the arrival of SARS-CoV-2</a> in early March 2020, the question was whether or not the new virus would become a pandemic. At the time, most experts believed that we had already reached the point of no return.</p>
<p>Today, 18 months later, the answer is clear. You don’t need to be a scientist to know it. This pandemic is the worst public health emergency of international concern that our modern society has faced. To date, <a href="https://www.who.int/emergencies/diseases/novel-coronavirus-2019">more than 215 million cases have been confirmed and 4.5 million deaths have been reported globally</a>.</p>
<p>These are just the reported cases. In reality, the number of cases is higher, and for a variety of reasons: lack of diagnostic capacity, infection without symptoms, unwillingness or inability to be tested or to visit a health facility, etc. The number of deaths due to COVID-19 is probably underestimated, both <a href="https://www.cp24.com/mobile/news/death-certificates-don-t-accurately-reflect-the-toll-of-the-pandemic-experts-say-1.5326970?cache=/7.363087">in Canada</a> and <a href="https://www.who.int/data/stories/the-true-death-toll-of-covid-19-estimating-global-excess-mortality">worldwide</a>.</p>
<p>In addition to changing the way we live our daily lives, the pandemic has brought scientific processes to public attention. Researchers, used to working in the shadows, now had to provide solutions — and explanations — to a very real threat, and they have been doing this under the watchful eye of the public.</p>
<figure class="align-right ">
<img alt="" src="https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/410911/original/file-20210712-19-geybnm.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://theconversation.com/ca/topics/vaccine-confidence-in-canada-107061">Click here for more articles in our series about vaccine confidence.</a></span>
</figcaption>
</figure>
<p>One of these solutions, vaccination, is far from new. Yet no matter what the context, <a href="https://timesofsandiego.com/opinion/2021/09/08/anti-vax-movement-has-a-long-deadly-history-from-smallpox-to-covid/">it has always generated news</a>. So where are we now?</p>
<p>Still in our laboratories! I recently completed my PhD in microbiology-immunology at Laval University, research that I conducted under the supervision of <a href="https://ipolitics.ca/2020/09/21/leading-vaccine-developer-walks-out-on-federal-vaccine-task-force/">Professor Gary Kobinger</a>, who is known for co-developing an effective vaccine and treatment for Ebola. This fall, I will begin a postdoctoral fellowship at the Galveston National Laboratory in Texas, where I will continue my work on the transmission of, and vaccine development against, severe pathogens.</p>
<h2>Relevant questions</h2>
<p>The World Health Organization (WHO) currently lists <a href="https://www.who.int/news-room/q-a-detail/coronavirus-disease-covid-19">13 available COVID-19 vaccines, based on four different platforms, including mRNA vaccines and viral vector vaccines</a>. Globally, more than five billion doses of vaccines have been administered. In Canada, five of these vaccines are currently approved for use: <a href="https://health-infobase.canada.ca/covid-19/vaccine-administration/">Pfizer-BioNTech, Moderna, AstraZeneca, COVISHIELD and Janssen</a>, with <a href="https://www.canada.ca/en/public-health/services/diseases/2019-novel-coronavirus-infection/prevention-risks/covid-19-vaccine-treatment/vaccine-rollout.html#a4">Pfizer-BioNTech, Moderna and AstraZeneca</a> in wide distribution. Combined, these vaccines have been administered to approximately <a href="https://health-infobase.canada.ca/covid-19/vaccination-coverage/">70 per cent</a> of Canadians.</p>
<figure class="align-center ">
<img alt="A woman administers a vaccine to another woman, seated, from behind" src="https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=396&fit=crop&dpr=1 600w, https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=396&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=396&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=497&fit=crop&dpr=1 754w, https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=497&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/420137/original/file-20210909-23-1miromd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=497&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">A woman receives her COVID-19 vaccine at Olympic Stadium in Montréal. Five vaccines have been approved in Canada and about 70 per cent of the population is doubly vaccinated.</span>
<span class="attribution"><span class="source">The Canadian Press/Paul Chiasson</span></span>
</figcaption>
</figure>
<p>However, <a href="https://theconversation.com/i-work-at-a-covid-19-vaccine-clinic-heres-what-people-ask-me-when-theyre-getting-their-shot-and-what-i-tell-them-167046">many people have raised questions about these vaccines</a>. And it is fair to do so! The unknown has always been a source of anxiety for human beings, and it is normal to <a href="https://theconversation.com/astrazeneca-covid-19-vaccine-faq-why-do-the-age-recommendations-keep-changing-does-it-cause-vipit-blood-clots-is-it-effective-against-variants-158302">ask questions</a>.</p>
<p>So, after working tirelessly to develop vaccines against COVID-19, what are scientists and doctors doing now?</p>
<p>They are doing what they have always done: Practising the best science they can within the limits of current knowledge. This scientific practice means continuing to evaluate the effectiveness of these vaccines <a href="https://www.who.int/en/activities/tracking-SARS-CoV-2-variants/">against new variants</a> in labs, as the virus continues to mutate. </p>
<p>It means continuing to record who has experienced side-effects (serious or not) from vaccination and continuing to investigate the potential links between these side-effects and the vaccine. The science they are practising involves studying the virus day and night to understand how it makes people sick, how we can prevent infection and what our options are for getting rid of it as quickly as possible.</p>
<p>The term “current knowledge” is very important here. It is possible that more side-effects related to vaccination will be discovered much later. Here’s why.</p>
<h2>The scientific method</h2>
<p>When vaccines are initially developed in the laboratory and tested on animals, it is normal that <em>not</em> all side-effects are identified. A mouse is not a human, after all, and models cannot account for all the variables that can be found in a human. Humans live in a complex environment and society where individuals each have their own genetics, immunity and lifestyle (exercise, smoking, nutrition).</p>
<p>Furthermore, the more people are vaccinated, the greater the likelihood of detecting a serious side-effect. Clinical trials, where <a href="https://theconversation.com/explainer-how-clinical-trials-test-covid-19-vaccines-146061">drugs and vaccines are evaluated in a small group of individuals</a> before being made available to the general population, are designed to be safe. Volunteers are usually healthy adults, without serious <a href="https://www.inspq.qc.ca/en/publications/3082-impact-comorbidities-risk-death-covid19">pre-existing medical conditions</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-how-clinical-trials-test-covid-19-vaccines-146061">Explainer: How clinical trials test COVID-19 vaccines</a>
</strong>
</em>
</p>
<hr>
<p>Vaccination is now widespread in many countries. It is therefore statistically normal that rarer effects (for example, ones that develop in only one in a million people) are now being observed. These effects are too rare to have been detected in a clinical trial of 10,000 people. This is the case for rare side-effects such as <a href="https://www.forbes.com/sites/siladityaray/2021/09/09/european-medicines-agency-lists-nerve-disorder-as-very-rare-side-effect-of-astrazeneca-covid-19-vaccine/?sh=5fd603e61a7b">Guillain-Barré syndrome</a> and <a href="https://healthycanadians.gc.ca/recall-alert-rappel-avis/hc-sc/2021/76203a-eng.php">Bell’s palsy</a>.</p>
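<p>The arithmetic behind this is straightforward. A rough back-of-the-envelope sketch (in Python; it assumes, purely for illustration, a side-effect rate of exactly one in a million and independence between vaccinated people) shows why a 10,000-person trial would almost certainly miss such an effect, while a rollout to tens of millions would almost certainly surface it:</p>

```python
# Chance of seeing at least one case of a one-in-a-million side-effect
# among n vaccinated people: P(at least one) = 1 - (1 - p)^n
rate = 1 / 1_000_000

def p_at_least_one(n, p=rate):
    """Probability of observing the side-effect at least once in n people."""
    return 1 - (1 - p) ** n

print(f"Trial of 10,000 people: {p_at_least_one(10_000):.1%}")
print(f"Rollout to 25 million:  {p_at_least_one(25_000_000):.1%}")
```

<p>Under these assumed numbers, a 10,000-person trial has roughly a one-in-a-hundred chance of catching even a single case, whereas a 25-million-person rollout is all but guaranteed to reveal some.</p>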
<p>The <a href="https://www.sciencebuddies.org/science-fair-projects/science-fair/steps-of-the-scientific-method">scientific method</a> follows a set process: observe a problem, formulate a hypothesis about its possible causes, evaluate it experimentally by controlling the variables, interpret the results and draw a conclusion.</p>
<p>It can turn out that our initial hypothesis is wrong, and that is equally acceptable. This is how science was designed. I think that before the pandemic, people considered science infallible. Opening up research to the general public has greatly changed this perception, especially as science quickly became embroiled in politics, particularly over <a href="https://www.who.int/health-topics/coronavirus/origins-of-the-virus">the question of the origin of the pandemic</a>.</p>
<figure class="align-center ">
<img alt="Justin Trudeau is surrounded by scientists, in a lab" src="https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/420138/original/file-20210909-21-17ccvfk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Prime Minister Justin Trudeau with scientists during a visit to the National Research Council of Canada (NRC), in Montréal, August 2020. The scientific method makes it possible to observe a problem, formulate a hypothesis about its causes, evaluate it experimentally by controlling the variables, interpret the results and draw a conclusion.</span>
<span class="attribution"><span class="source">The Canadian Press/Graham Hughes</span></span>
</figcaption>
</figure>
<h2>Knowing how to communicate</h2>
<p>And that’s where the problem comes from, among other things. <a href="https://doi.org/10.1038/d41586-020-00452-3">The key to effective scientific communication is not the science. It’s the communication</a>. The results of laboratory experiments and clinical trials are what they are. Either the vaccine or drug works to reduce mortality, or it doesn’t work, and we go back to the drawing board.</p>
<p>So where does the reluctance about vaccines come from? One of the main problems is not a lack of information about the safety of the vaccines. Almost everyone has access to this information on the internet. The problem is the lack of trust in institutions, <a href="https://www.cairn-int.info/journal-revue-internationale-de-politique-comparee-2003-3-page-433.htm">which has been growing globally in recent years</a>.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-better-conversations-can-help-reduce-vaccine-hesitancy-for-covid-19-and-other-shots-159321">How better conversations can help reduce vaccine hesitancy for COVID-19 and other shots</a>
</strong>
</em>
</p>
<hr>
<p>But this trust can be earned — or regained. It just takes time, respect and empathy. A study by researchers at the <a href="https://doi.org/10.1080/21645515.2018.1549451">Centre Hospitalier Universitaire de Sherbrooke</a> shows that an educational session about immunization that used motivational interviewing techniques with parents of infants resulted in a nine per cent increase in immunization rates compared with families who did not receive the sessions.</p>
<h2>Finding the right answer to a question</h2>
<p>Ultimately, the goal of science is to find the right answer to a question.</p>
<p>Of course, human nature being what it is, we are not immune to conflicts of interest. We need to ensure transparency about things like funding and links between scientists and potential investors. This is especially important since we are all responsible for funding research, whether through federal subsidies, which are partly derived from taxes paid by citizens, or through the ordinary purchase of drugs in pharmacies.</p>
<p>Since this concerns everyone, it is high time that the public became more involved. After all, scientific discoveries and health measures are everybody’s business. For example, few citizens are familiar with “<a href="https://www.ncbi.nlm.nih.gov/books/NBK285579/">gain-of-function research</a>.” These studies can involve a level of risk ranging from very low to very high. For example, producing a drug from a bacterium carries little risk and much benefit. However, increasing the virulence or transmissibility of a virus such as Ebola or influenza could carry a lot of risk if such research were carried out by individuals with bad intentions, or in poorly secured laboratories.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/origins-of-sars-cov-2-why-the-lab-leak-idea-is-being-considered-again-161947">Origins of SARS-CoV-2: Why the lab-leak idea is being considered again</a>
</strong>
</em>
</p>
<hr>
<p>As with any aspect of science, a risk-benefit analysis must be carried out. Note that in the vast majority of institutions where research is done, the committees assessing whether or not a study is worth doing are composed not only of scientists and students, but also of members of the public.</p>
<p>Now each side just has to do its part. Scientists need to do a better job of communicating their results and the interpretation of them, as well as specifically answering questions of interest to the public and regaining the public’s trust. They need to listen and stop hiding behind mountains of data, complicated words and scientific articles that are not easily accessible to the general public.</p>
<p>To those who are hesitant about vaccination, scientists should ask: “What data would make you change your mind?”, “Why do you think the current data are insufficient?”, “Why do you trust this individual, but not another or the institutions?” This is how constructive dialogue can be initiated and more in-depth reflection can begin.</p>
<p>For their part, citizens can adopt better practices when seeking information, rather than considering only information that fits their personal narrative. It is also important to avoid falling into a spiral of conspiracy theories and trusting false experts. Do not be afraid to doubt, to find other sources that confirm or refute what you have just read, and to ask trusted experts around you what they think.</p>
<p><em>Do you have a question about COVID-19 vaccines? Email us at <a href="mailto:ca-vaccination@theconversation.com">ca‑vaccination@theconversation.com</a> and vaccine experts will answer questions in upcoming articles.</em></p>
<p class="fine-print"><em><span>Marc-Antoine De La Vega does not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and has declared no affiliations other than his research organisation.</span></em></p>Before the pandemic, the public perceived science as infallible and inaccessible. But the opening up of research to the general public has changed that perception.Marc-Antoine De La Vega, PhD Student in Microbiology-Immunology, Université LavalLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1654652021-08-03T19:15:27Z2021-08-03T19:15:27ZLet’s choose our words more carefully when discussing mātauranga Māori and science<figure><img src="https://images.theconversation.com/files/414267/original/file-20210803-27-ac96px.jpg?ixlib=rb-1.1.0&rect=12%2C12%2C8445%2C3474&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">www.shutterstock.com</span></span></figcaption></figure><p>Responding to the <a href="https://www.nzherald.co.nz/nz/scientists-rubbish-auckland-university-professors-letter-claiming-maori-knowledge-is-not-science/GN55DAZCM47TOZUTPYP2Q3CSLM/">recent controversy</a> over <a href="https://www.tandfonline.com/doi/full/10.1080/03036758.2016.1252407">mātauranga Māori</a> and the letter he co-authored titled “In defence of science”, Emeritus Professor <a href="https://www.newshub.co.nz/home/new-zealand/2021/08/watch-the-hui-monday-august-2.html">Michael Corballis said</a>: “We don’t know any Māori who knows what mātauranga is.” </p>
<p>This immediately made us wonder: what would happen if we asked a group of scientists what science is?</p>
<p>Common responses to the question “what is science?” focus on causal explanations, controlled experiments, hypothesis testing or <a href="https://plato.stanford.edu/entries/pseudo-science/#Fals">falsification</a> (those are popular options, not an exhaustive list). </p>
<p>All point to important aspects of science, and all have been proposed as ways of defining it. But there is no single answer to the question “what is science?”.</p>
<p>This doesn’t mean people can characterise science however they want. Far from it. Our point, instead, is that questions like “what is science?” or “is mātauranga science?” could be asking about any number of different ideas.</p>
<p>Ambiguous statements are poor starting points for careful, constructive debates. We see people talking past each other in discussions of mātauranga and science. These discussions could benefit from more careful articulation of the concepts at stake. We’ll start with science.</p>
<h2>What is science?</h2>
<p>When we ask what something is, we often seek a definition of that thing. But whereas some concepts are pretty easily defined (electron, uncle), some aren’t (art, life, science).</p>
<p>When we ask a question about a hard-to-define concept – “<a href="https://plato.stanford.edu/entries/art-definition/">what is art?</a>” or “<a href="https://www.quantamagazine.org/what-is-life-its-vast-diversity-defies-easy-definition-20210309/">what is life</a>?” – dictionary definitions aren’t much use, because what we are after is an understanding of the range of conceptual work the term does for us.</p>
<p>So, when we ask “what is science?”, what are we asking? One way to answer is to list methodologies that many or most scientists use, such as testing hypotheses, conducting controlled experiments or gathering empirical evidence.</p>
<p>Another way to answer is to point to a list of goals and values – yes, <a href="https://plato.stanford.edu/entries/scientific-objectivity/#ObjeAbseNormCommValuFreeIdea">despite the myth of a value-free ideal</a>, values are part of science – that many or most scientists strive for. These include reproducibility, empirical accuracy or reliable causal knowledge of how the world works.</p>
<p>Yet another approach shifts away from listing science’s characteristic hallmarks and points to its status. Here, we might answer the question “what is science?” by saying something like, “science represents our best empirical knowledge of how the natural world works”.</p>
<h2>The many faces of science</h2>
<p>Any of those answers can be framed generically. We can talk about science universally: as a set of methodologies anyone can employ, values anyone can strive for, or status any body of knowledge can achieve, at a given time, in a given domain.</p>
<p>We can also talk about science in a specific way: as a modern institution housed in universities, companies and NGOs. We can talk about the history and culture of this institution: it traces back to the Enlightenment and to earlier times and places, and it is funded by governments and industry and rich donors.</p>
<p>We can talk about things this institution, or particular people involved in it, have done throughout its history: discovered antiseptics and subatomic particles and the structure of DNA, exploited indigenous peoples around the world in the name of research, come together globally to develop COVID vaccines in under a year.</p>
<p>We see all the above understandings of science — methodological, epistemic, status-based, universal and specific — on display, and often run together, in the recent debate about mātauranga and science. And that’s not even an exhaustive list of ways to address the question “what is science?”.</p>
<h2>Slow down, show respect</h2>
<p><a href="https://www.tandfonline.com/doi/full/10.1080/03036758.2016.1252407">Mātauranga</a> spans Māori knowledge, culture, values and worldview. When someone asks, “is mātauranga science?”, there is a range of things they could really be asking about, including: </p>
<ul>
<li><p>does mātauranga (or do forms of it) use scientific methodologies to <a href="https://scientists.org.nz/resources/Documents/NZSR/NZSR76(1-2).pdf">generate knowledge</a>?</p></li>
<li><p>do we value mātauranga as a valid way of knowing about the world alongside science?</p></li>
<li><p>how should we uphold this value in a way that respects intersections and differences?</p></li>
<li><p>should relevant content from mātauranga be taught in science classrooms?</p></li>
</ul>
<p>These questions and others are (at a bare minimum) starting points for more productive discussions than “is mātauranga science?” There is nothing constructive to be gained by framing those questions in ambiguous definitional terms.</p>
<p>In closing, we note the question “what is philosophy?” has no clear and easy answer, either! A favourite quotation about philosophy says it is “thinking in slow motion”. More of that would be welcome in the current discussion.</p>
<p>In practice, that will mean striving to avoid ambiguity in everything we say, pausing with respect to consider our audience’s point of view — and choosing our words carefully.</p>
<p class="fine-print"><em><span>Emily Parke has received funding from Marsden to explore mātauranga Māori and science.</span></em></p><p class="fine-print"><em><span>Dan C H Hikuroa has received research funding from Marsden, MBIE, National Science Challenges, Ngā Pae o te Māramatanga, Te Pūnaha Matatini to explore mātauranga Māori and science.</span></em></p>Ambiguous language and a rush to judgment have defined the debate about mātauranga and science. It’s time to slow down and stop talking past each other.Emily Parke, Senior Lecturer in Philosophy, University of Auckland, Waipapa Taumata RauDan C H Hikuroa, Senior Lecturer, University of Auckland, Waipapa Taumata RauLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1374412020-05-04T15:08:38Z2020-05-04T15:08:38ZDon’t hold your breath for a COVID-19 vaccine in 2020<p>Donald Trump may be “<a href="https://www.theguardian.com/world/2020/may/04/trump-very-confident-of-covid-19-vaccine-in-2020-and-predicts-up-to-100000-us-deaths">very confident</a>” we will have a vaccine for COVID-19 by the end of the year, but the rest of us should be more cautious. Billions of dollars are being spent trying to develop vaccines and treatments as a more permanent solution to the crisis than the lockdowns currently being enforced around the world. </p>
<p>As of May 2020 there are <a href="https://milkeninstitute.org/Mobilizing-for-Action-in-the-Time-of-COVID-19">182 treatments and 99 different vaccines</a> being developed globally. But, based on recent history, only one or two are likely to be transformative, a couple may be partially helpful, some will be shown as downright dangerous, and the majority will have conflicting evidence as to their effectiveness. </p>
<p>This is because medical research is a slow and painstaking process. It is also very complicated and easy to come to the wrong conclusions. </p>
<h2>Trusting the experts</h2>
<p>One good thing to have come out of the coronavirus pandemic seems to be a renewed trust in experts. The routine presence of scientists at government briefings seems to recognise that rather than deserving our suspicion, we need these people to beat the virus. </p>
<p>But more trust in experts means more scrutiny of science as it happens – the latest <a href="https://www.bbc.co.uk/news/world-us-canada-52511270">studies showing promising results</a> are now headline news. This can be worrying because, while there is no doubt that treatments for COVID-19 will eventually be found, it is easy for enthusiasm to turn into cynicism if expectations are not met as quickly as the public and politicians may hope. </p>
<p>There seems to be little recognition that, while thousands of drugs have shown promise in early animal or clinical tests – for example, the <a href="https://www.nytimes.com/2020/04/27/world/europe/coronavirus-vaccine-update-oxford.html">vaccine trials</a> at the University of Oxford – the vast majority that show early promise will never make it into routine clinical use. On average it takes <a href="https://www.pharmaceutical-journal.com/publications/tomorrows-pharmacist/drug-development-the-journey-of-a-medicine-from-lab-to-shelf/20068196.article">12 years and over US$1 billion (£805 million)</a> to get a drug to market.</p>
<h2>Good research takes time</h2>
<p>I chair research ethics committees. Over the last few years I have reviewed thousands of research protocols representing the very best, and occasionally some quite poor, examples of medical research. </p>
<p>Good research is defined as rigorous and reliable, producing results that are not only interesting, but are practical, useful and in some cases transformative. Such results are also reported clearly, transparently and in the context of previous studies. This is precisely the type of research we need to address the COVID-19 crisis.</p>
<p>But such good research comes at a cost. Much of society thinks of cost in terms of dollars and pounds, and indeed mindful of our own survival, scientists and researchers are of course always going to lobby for more investment. While it is very helpful to have the funds to order any chemical that is needed, access highly specialised equipment, or pay others to conduct experiments and analyse results quickly, we must take care never to underestimate the importance of taking time to think carefully about what results actually mean. </p>
<p>It is only once researchers have taken the time to understand the context of results that they can start turning them into effective applications or treatments. The real cost of good research is therefore time. </p>
<p>The frustrating truth about medical research is that the majority of experiments appear not to work because the subject being studied is so horrendously complex. In fact, rather than “not working”, many experiments are simply inconclusive. To make progress you have to slow down, look at the evidence and take time to think very carefully about what the results might mean. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=417&fit=crop&dpr=1 600w, https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=417&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=417&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=525&fit=crop&dpr=1 754w, https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=525&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/332392/original/file-20200504-83736-1ngoxz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=525&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Positive results in animals often don’t translate to scientific breakthroughs for humans.</span>
<span class="attribution"><span class="source">From www.shutterstock.com</span></span>
</figcaption>
</figure>
<p>The thinking needed for this takes years. I was involved with one project that was delayed for almost ten years while the team tried to work out why a single animal showed cardiovascular complications. Another project I worked on showed promise reducing an <a href="https://www.nature.com/articles/417254a">Alzheimer-like pathology in mice</a>, yet 18 years later similar effects have yet to be conclusively shown in humans. Commendably, the team is still working on it.</p>
<p>The reality is that the long road to a vaccine or drug for any disease is littered with trials that did not lead to expected results. Even when a study is successful, it takes a long time to go from the lab to the general public.</p>
<h2>The pressure to find a cure</h2>
<p>One worrying aspect of the current situation is the pressure on researchers to work quickly and come up with solutions for COVID-19 almost immediately. For perhaps the first time, financial resources are not a limiting factor, and so politicians and the public are expecting researchers to take the cash and provide the answers. This has been coupled with significant pressure on regulators to streamline or <a href="https://www.nhsx.nhs.uk/covid-19-response/data-and-information-governance/information-governance/copi-notice-frequently-asked-questions/">even suspend</a> some of the normal processes so that treatments can get to the clinic as quickly as possible. </p>
<p>Lured by promises of unlimited funding, and perhaps fame should their chosen idea work, some researchers may be tempted to engage in questionable research practices. History shows that whenever a large amount of money is involved, the temptation to commit fraud, misconduct or other questionable practices increases. The UK spent more than £400 million during the 2009 swine flu outbreak stockpiling a drug whose effectiveness had been <a href="https://publications.parliament.uk/pa/cm201314/cmselect/cmpubacc/295/29503.htm">inflated by</a> the manufacturers due to publication bias – where negative or inconclusive results from a trial are not published in scientific journals, but positive results are. </p>
<p>Without appropriate scrutiny there is a real risk that ineffective, or even harmful, treatments begin to get used. This may be considered an acceptable risk in the current crisis, but if so, it is important that any new treatments are monitored very closely and withdrawn without hesitation if the harms mount up. </p>
<p>Given time – maybe two, three or perhaps even ten years – researchers will be able to take stock of the evidence from experiments and trials, perform a meta-analysis and systematic review, hold international conferences, and then, following careful thought, tell the world what the best treatment for COVID-19 is. </p>
<p>The world clearly needs scientific and medical answers to the current pandemic as soon as possible, but we need to recognise that initially we may only find partial or tentative answers. Instead of a quick vaccine that completely prevents COVID-19, a variety of partial successes will be combined until eventually a full solution is found. </p>
<p>There may even be some blind alleys with promising, but ultimately futile, treatment ideas. This is not a failure of research, or misuse of resources. Above all, researchers need to be supported to work with integrity, and not be made scapegoats for the challenges that undoubtedly lie ahead.</p>
<p class="fine-print"><em><span>Simon chairs research ethics committees for the NHS, the Ministry of Defence and Public Health England, and sits on the Department of Health’s Confidentiality Advisory Group.</span></em></p>Politicians are throwing billions of dollars at coronavirus vaccine trials, but the real cost of research is the one thing we’re lacking – time.Simon Kolstoe, Senior Lecturer in Evidence Based Healthcare and University Ethics Advisor, University of PortsmouthLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1346532020-04-09T12:08:30Z2020-04-09T12:08:30ZCoronavirus research done too fast is testing publishing safeguards, bad science is getting through<figure><img src="https://images.theconversation.com/files/326273/original/file-20200407-11299-1u50za7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Science is happening fast and mistakes are being made </span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/reaction-formula-of-metabolism-royalty-free-image/626830901?adppopup=true">Yagi Studio/ DigitalVision via Getty Images</a></span></figcaption></figure><p>It has been barely a few weeks since the coronavirus was <a href="https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020">declared a pandemic</a>. The pace at which the SARS-CoV-2 virus has spread across the globe is jolting, but equally impressive is the <a href="https://www.wsj.com/articles/inside-the-race-to-find-a-coronavirus-cure-11586189463?mod=hp_lead_pos5">speed at which scientists and clinicians have been fighting back</a>. </p>
<p>I am a <a href="https://pharmacyschool.usc.edu/irving-steinberg/">pharmacotherapy specialist</a> and have consulted on infectious disease treatments for decades. I am both exhilarated and worried as I watch the unprecedented pace and implementation of <a href="https://dx.doi.org/10.1021%2Facscentsci.0c00272">medical research currently being done</a>. Speed is, of course, important when a crisis such as COVID-19 is at hand. But speed – in research, the interpretation and the implementation of science – is a risky endeavor. </p>
<p>The faster science is published and implemented, the greater the chances it is unsound. Mix in the panic and stress of the current pandemic and it becomes harder to make sure the <a href="https://www.medscape.com/viewarticle/927557">right information is communicated and adopted correctly</a>. Finally, governing bodies such as the World Health Organization, politicians and the media act as sources of trustworthy messaging and policy making. Each step – research, interpretation, policy – has safeguards in place to make sure the right information is acquired, interpreted and implemented. But pace and panic are testing these safety measures like never before.</p>
<h2>Unprecedented pace</h2>
<p>The process of taking an idea from theory through testing and eventually toward implementation has been refined in modern times to make sure medical studies and publications are truthful and accurate. </p>
<p>Once research is completed, investigators analyze their results and write a manuscript. They then <a href="https://www.elsevier.com/connect/7-steps-to-publishing-in-a-scientific-journal">submit it to a journal, where it is reviewed</a> by experts in that field who assess whether the methods, analysis and conclusions are sound. If the paper is accepted, it is then further edited and published in a journal. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/326275/original/file-20200407-172365-o47kdk.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">People are looking to international and government agencies for guidance.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Virus-Outbreak-Trump/4a089043863b4a70a382d537c24d3956/12/0">AP Photo/Alex Brandon</a></span>
</figcaption>
</figure>
<p>From there, groups like the WHO, medical societies and government agencies evaluate this and other evidence-based information to decide whether to establish new recommendations or change previous ones. It normally takes from <a href="https://www.editage.com/insights/peer-review-process-and-editorial-decision-making-at-journals">several months to more than a year</a> to go from submission to publication. But the rush to publish during this pandemic has <a href="https://www.sciencemag.org/news/2020/02/completely-new-culture-doing-research-coronavirus-outbreak-changes-how-scientists">shortened the time from submission</a> to online publication to one to two weeks in numerous cases. </p>
<p>There has also been a <a href="https://www.sciencemag.org/news/2020/02/completely-new-culture-doing-research-coronavirus-outbreak-changes-how-scientists">huge increase in preprint publication</a> – publishing studies online before they are adequately peer-reviewed – and these are a good example of the risk that comes with the rapid release of data. </p>
<p>On March 17, French investigators posted a <a href="https://www.sciencedirect.com/science/article/pii/S0924857920300996">prepublication clinical paper online</a> touting the successful use of hydroxychloroquine in COVID-19 patients. Despite the media and government attention, the study was described by director of the National Institute of Allergy and Infectious Diseases <a href="https://www.nejm.org/doi/full/10.1056/NEJMe2002387">Anthony Fauci as “anecdotal”</a> due to the poor study design.</p>
<p>On April 3, the International Society of Antimicrobial Chemotherapy, the sponsoring organization of the very journal posting this prepublished article, <a href="https://www.isac.world/news-and-publications/official-isac-statement">agreed and stated</a> “….the article does not meet the Society’s expected standard,” and “Although ISAC recognises it is important to help the scientific community by publishing new data fast, this cannot be at the cost of reducing scientific scrutiny and best practices.” The <a href="https://www.nytimes.com/2020/04/06/us/politics/coronavirus-trump-malaria-drug.html">debate over the usefulness of hydroxychloroquine</a> will likely continue until well-designed trials are completed.</p>
<p>The deliberate steps of scientific investigation, followed by editorial scrutiny, are guardrails. When these are disrupted there is a real risk that policy organizations may make consequential mistakes in spite of good intent.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=414&fit=crop&dpr=1 600w, https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=414&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=414&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=520&fit=crop&dpr=1 754w, https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=520&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/326278/original/file-20200407-44994-ktl9zh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=520&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Almost daily, research is put out to the public on drugs to take or avoid because of the coronavirus. Much of it is very preliminary.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/pharmacy-royalty-free-image/1134353448?adppopup=true">Mint Images/Mint Images RF via Getty Images</a></span>
</figcaption>
</figure>
<h2>When pace meets with panic</h2>
<p>Nothing better illustrates how trusted institutions can make misinformed recommendations than the recent fiasco over ibuprofen. </p>
<p>The most common early symptom of COVID-19 is fever, and ibuprofen is one of the most widely used drugs in the world to treat fever. In <a href="https://www.thelancet.com/journals/lanres/article/PIIS2213-2600(20)30116-8/fulltext">a letter published in The Lancet Respiratory Medicine</a>, European researchers raised concerns that ibuprofen use could worsen COVID-19 symptoms. The idea is that since ibuprofen increases the quantity of ACE2 in human cells – the protein that the coronavirus uses to enter lung cells – the virus could infect lung cells more easily if a person was on ibuprofen. This was not a study nor did it present sufficient experimental evidence; it was simply a theoretical concern based on a mechanism.</p>
<p>Three days after the letter was published, the <a href="https://twitter.com/olivierveran/status/1238776545398923264">French health minister tweeted a message</a> urging people to avoid ibuprofen for coronavirus associated fever <a href="https://www.bmj.com/content/368/bmj.m1086">based on four “cited” cases</a> of people getting sicker after taking ibuprofen. These cases were never published in a journal. The French Health Ministry followed this with a <a href="https://dgs-urgent.sante.gouv.fr/dgsurgent/inter/detailsMessageBuilder.do?id=30500&cmd=visualiserMessage">broad ban on treating COVID-19 fever with</a> nonsteroidal anti-inflammatory drugs like ibuprofen. The <a href="https://www.sfgate.com/science/article/Should-you-take-ibuprofen-if-you-have-COVID-19-15143646.php">WHO tweeted an essentially similar warning</a>. The <a href="https://nypost.com/2020/03/17/4-year-olds-coronavirus-symptoms-worsen-after-taking-ibuprofen">media followed with more case anecdotes</a>, dubiously relating worsening early symptoms with ibuprofen use and referring to the letter as a “study,” adding to the confusion and fear.</p>
<p>The Lancet letter also hypothesized that two other drugs commonly used to treat hypertension and diabetes – ACE-inhibitors (ACE-I) and angiotensin receptor blockers (ARBs) – could be problematic in people with COVID-19. However, the mechanism they put forward was incompletely described and neglected that a protein these drugs promote can be <a href="https://academic.oup.com/eurheartj/advance-article/doi/10.1093/eurheartj/ehaa235/5810479">helpful in reducing inflammation and tissue damage</a> in the lungs and heart.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=587&fit=crop&dpr=1 600w, https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=587&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=587&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=738&fit=crop&dpr=1 754w, https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=738&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/326279/original/file-20200407-44994-xkq5s1.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=738&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The court of public discourse is always the last safeguard in science.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/illustration/couple-of-environmentalists-protesting-royalty-free-illustration/1214931674?adppopup=true">jemastock/ iStock / Getty Images Plus via Getty Images</a></span>
</figcaption>
</figure>
<h2>The response</h2>
<p>This letter to The Lancet slipped past the safeguards in research and institutional and media interpretation, but one of science’s oldest pastimes – definitively calling out the errors of others – reestablished patience and perspective. </p>
<p><a href="https://www.cnn.com/2020/03/16/health/coronavirus-ibuprofen-french-health-minister-scn-intl-scli/index.html">Clinicians and scientists pushed back swiftly</a>, supporting the use of ibuprofen in COVID-19 patients. The support was outlined in a <a href="https://ecancer.org/en/journal/article/1022-associations-between-immune-suppressive-and-stimulating-drugs-and-novel-covid-19-a-systematic-review-of-current-evidence">published literature review</a>. In response, the WHO <a href="https://www.sciencealert.com/who-recommends-to-avoid-taking-ibuprofen-for-covid-19-symptoms">quickly reversed its position on ibuprofen</a>. </p>
<p>There was a similar rapid response to the statements about ARBs. Within days, three prominent cardiology groups, including the American Heart Association, <a href="https://www.acc.org/latest-in-cardiology/articles/2020/03/17/08/59/hfsa-acc-aha-statement-addresses-concerns-re-using-raas-antagonists-in-covid-19">released a joint statement</a> urging practitioners not to discontinue ACE-I and ARBs in their patients. </p>
<p>The risk-benefit ratio is always a clinical factor for the use of any drug in any patient. But the risk must be more than theory for the use of a drug to be discontinued or any major policy change to be implemented.</p>
<h2>Some perspective</h2>
<p>As the coronavirus rampages across the U.S., it is incredibly important to know whether commonly used drugs like ibuprofen or ARBs are risky, neutral or of therapeutic potential. There are ways to find out quickly. Researchers can look for correlations between the use of ibuprofen or ARBs and more severe infections or deaths, for example. And standard clinical trials can, should and are being done. There are several studies currently underway testing the effect and risk of <a href="https://clinicaltrials.gov/ct2/results?cond=covid&term=ACE2&cntry=&state=&city=&dist=">ARBs for COVID-19 patients</a>. But until the science is finished, it is <a href="https://www.medscape.com/viewarticle/928155?nlid=134913_3901&src=wnl_newsalrt_200406_MSCPEDIT&uac=41901BN&impID=2337551&faf=1#vp_2">foolish and potentially dangerous</a> to flee from tested clinically important drugs. </p>
<p>Scientists and policymakers must take quick steps and avoid missteps. Proper scientific method and conduct of studies, carefully reviewed publications and cogent post-release interpretations are necessary safeguards that ensure the best and safest medicines are prescribed and provided. The pressure and desperation of the moment are forcing researchers and policymakers to be innovative and act quickly, but what is done should stay within the guiding concepts of medical research.</p>
<p>[<em>Get facts about coronavirus and the latest research.</em> <a href="https://theconversation.com/us/newsletters?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=upper-coronavirus-facts">Sign up for The Conversation’s newsletter.</a>]</p><img src="https://counter.theconversation.com/content/134653/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Irving Steinberg does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Researchers, scientific journals and health agencies are doing everything they can to speed up coronavirus research. The combination of pace and panic during this pandemic is causing mistakes.Irving Steinberg, Dean for Faculty, USC School of Pharmacy; Associate Professor of Clinical Pharmacy & Pediatrics, School of Pharmacy & Keck School of Medicine of USC; Director, Division of Pediatric Pharmacotherapy, Dept of Pediatrics, LAC+USC Medical Center, University of Southern CaliforniaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1258882019-10-31T18:36:11Z2019-10-31T18:36:11Z2019 Nobel Prize in Economics: the limits of the clinical trial method<p>The 2019 Nobel Prize in Economics was awarded to Esther Duflo, Abijit Banerjee and Michael Kremer for their work adapting the method of randomized control trials (RCTs) to the field of development. The jury believes this new type of experimentation has <a href="https://www.nobelprize.org/prizes/economic-sciences/">“considerably improved our ability to fight global poverty” and “transformed development economics”</a>. There are reasons to applaud this decision – not only is one of the three winners a woman, the prize recognizes the importance of economic development and of an empirical approach to fieldwork. </p>
<p>However, the validity and impact of the growing use of randomized control trials require scrutiny. Working from a <a href="https://hal-inalco.archives-ouvertes.fr/ird-02112849v1">July 2019 article</a>, we would like to reaffirm our reservations. While RCTs have many advantages, claiming to be able to use them for the entire gamut of development interventions is deeply problematic.</p>
<p>In an RCT, two groups are randomly selected from a homogeneous population: the first receives an “intervention” (medicine, grant, loan, training, etc.), while the second gets a “placebo” – either a different intervention or no intervention at all. After a certain time, the two groups are evaluated to compare the efficacy of the intervention or analyze two distinct approaches. While <a href="https://www.cgdev.org/publication/should-randomistas-continue-rule">controversial</a>, this method has been widely used in the medical field since the <a href="http://documents.worldbank.org/curated/en/174451494942048090/The-entry-of-randomized-assignment-into-the-social-sciences">mid-20th century</a>, and has since been applied to fields such as education, crime and tax reform, particularly in the United States in the ‘60s, ‘70s and ‘80s.</p>
<p>Over the last 15 years, randomized control trials have been applied to a new field: development aid policy. A vast range of interventions have been put to the test of “randomization”, especially in education (incentives aimed at reducing absenteeism among teachers, de-worming medicine designed to improve student attendance), health (water filters, mosquito nets, training or bonus systems for healthcare workers, free consultations, medical advice via text messages, etc.), financing (microcredit, micro-insurance, savings, financial education), and governance.</p>
<h2>A supposed monopoly on scientific diligence</h2>
<p>RCTs are described by their proponents as a revolutionary paradigm shift, as seen in the book by Esther Duflo and Abijit Banerjee, <a href="http://www.seuil.com/ouvrage/repenser-la-pauvrete-esther-duflo/9782021005547"><em>Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty</em></a> and in their <a href="https://olc.worldbank.org/content/state-economics-influence-randomized-controlled-trials-development-economics-research-and">public statements</a>. Moreover, those in the political and economic spheres tend to attribute the labels “rigorous” and even “scientific”, exclusively to these kinds of trials.</p>
<p>As randomized control trials have become increasingly dominant, they have had a crowding-out effect on other approaches. At the World Bank, from 2000 to 2010 just 20% of all evaluations were RCTs. In the five years that followed, the ratio was practically reversed, and this trend has been mirrored at <a href="https://www.3ieimpact.org/">3IE</a>, the international network specializing in evaluation.</p>
<p>Is this crowding-out effect scientifically legitimate and politically desirable?</p>
<h2>From theory to practice</h2>
<p>Be it of a project, policy or program, all impact evaluations face the same challenge: how can one isolate the impact of the intervention from changes arising from outside sources? Several methods are available but, in theory, RCTs have an indisputable advantage: randomized selection from large sample sizes, in principle and on average, ensures that all the differences measured between the two groups are due to the intervention, and nothing else.</p>
<p>When it comes to answering basic questions about development, however, RCTs are hardly effective for at least three reasons:</p>
<ul>
<li><p>Their <strong>external validity is weak</strong>, meaning they are extremely localized and rely on samples that do not represent the population as a whole. Their results are therefore difficult to generalize. With this method, it is impossible to know if results obtained in a rural area of Morocco would apply in another area of the country, or in Tunisia, or Bolivia for example. This limitation is widely acknowledged and accepted but, in practice, few take it into account.</p></li>
<li><p>Contrary to a common claim, the <strong>internal validity of RCTs is also problematic</strong>. Their capacity to measure the impact of an intervention is imperfect. As was demonstrated by the 2016 economics Nobel Prize laureate <a href="http://www.nber.org/papers/w22595">Angus Deaton and his epistemologist colleague, Nancy Cartwright</a>, RCTs are ill-equipped to strike the right balance between bias (which must be minimized) and precision (which must be maximized), and therefore tend to focus on <em>average</em> results for an entire given population. Yet, the impacts of the policies under study are often heterogeneous, and heterogeneity is decisive in public policy. Furthermore, the implementation of study protocols is hampered by numerous practical and ethical difficulties, to the extent that comparisons between the population receiving the intervention and the control population are often skewed.</p></li>
<li><p>RCTs often involve a <strong>range of stakeholders with interests that are sometimes in conflict</strong>. Their interplay influences every stage of the trial: the technical protocols, their implementation, the analysis of the results, and their publication and distribution. Here, again, arrangements are made to the detriment of scientific rigor. RCTs become political areas, with interests at play involving government re-election, (<a href="http://www.czech-in.org/ees/full_papers/37354.pdf">e.g. the evaluation of an anti-poverty program in Mexico</a>), the dominant discourse on certain development tools (<a href="https://hal-univ-paris10.archives-ouvertes.fr/hal-01640808">e.g. micro-insurance</a>) and their reputation, in some cases determined by RCT advocates (<a href="http://www.columbia.edu/%7Emh2245/w/worms.html">e.g. the controversy surrounding de-worming</a>) and, sometimes, the <a href="http://socioeco.hypotheses.org/3393">publishing imperatives</a> concerning research studies…</p></li>
</ul>
<h2>An example</h2>
<p>We recently replicated a randomized control trial conducted by Esther Duflo and her colleagues on <a href="https://www.iree.eu/publications/publications-in-iree/estimating-microcredit-impact-with-low-take-up-contamination-and-inconsistent-data-a-replication-study-of-crepon-devoto-duflo-and-pariente-american-economic-journal-applied-economics-2015/">microcredit in Morocco</a>. This kind of exercise is vital for ensuring research reliability, and consists of using the raw data from the survey to try to reproduce the study’s results. We were able to do so, which is good news, but we also uncovered numerous problems and errors, some of which seriously undermined the study’s internal and external validity.</p>
<ul>
<li><p>The sampling was so different from the original protocol that it was impossible to characterize the population studied and understand the representativeness of the results.</p></li>
<li><p>The gender and ages of the members of the households surveyed varied so widely before and after the intervention that, in 20% of cases, they could not possibly be the same households.</p></li>
<li><p>Estimations of the assets owned by the households were incoherent, in spite of the fact that these estimations are a key variable for evaluating the economic impact of the program.</p></li>
<li><p>Although the area of the survey was supposed to have been free of credit prior to the intervention and the control area was supposed to remain credit-free during the study, this was not the case.</p></li>
<li><p>The researchers arbitrarily decided to remove 27 households, with higher values on certain variables, from the dataset prior to analysis (0.5% of the total). If just 12 more or 12 fewer households had been removed (0.3% or 0.7% of the total), the results would have been completely different.</p></li>
</ul>
<p>Our replication gave rise to <a href="https://dial.ird.fr/publications/documents-de-travail-working-papers">discussions with Duflo and her colleagues</a> in the form of working documents. These discussions reveal profound divergences on what constitutes scientific validity for a field study. We believe our peers must examine this question more carefully.</p>
<h2>Behind the success of RCTs</h2>
<p>Ultimately, the <a href="https://ideas.repec.org/a/taf/jdevef/v4y2012i2p314-327.htm">kinds of interventions that can be evaluated using RCTs are limited</a>, amounting to just 5%, according to the <a href="https://assets.publishing.service.gov.uk/media/57a08a6740f0b6497400059e/DFIDWorkingPaper38.pdf">British development agency</a>. Restricting the scope of impact studies to those interventions likely to conform to the standards of randomization not only excludes many projects, but also numerous fundamental aspects of development, both economic and political, such as regulating large companies, tax and international trade, to name but a few.</p>
<p>So what lies behind the success of RCTs? It is not always the scientific superiority of a method or theory that explains its success but, rather, the ability of its advocates to <a href="https://books.openedition.org/pressesmines/1196?lang=fr">convince a sufficient number of stakeholders at a specific time</a>. In other words, success arises from both supply and demand. On the demand side, the success of RCTs relates to changes in economics as a discipline, including the recent emphasis on quantification, the micro origins of macro processes and, within these micro origins, the psychological and cognitive motivations behind individual behaviours.</p>
<p>The success of RCTs also illustrates changes in the area of development aid, where we are seeing increasing numbers of small projects aimed at <a href="https://www.zora.uzh.ch/id/eprint/125976/">correcting individual behaviours</a> rather than setting up or maintaining development infrastructure and national development policies.</p>
<p>The supply side has largely been shaped by a new brand of scientific entrepreneurs who use numerous strategies in an attempt to “corner the market”. These researchers are young and come from a <a href="https://www.jstor.org/stable/26491530?seq=1#page_scan_tab_contents">small group</a> of (mainly American) top universities. They have managed to combine the magic formula of academic excellence (scientific legitimacy), the ability to win over public opinion (media visibility, compassionate mobilization and moral commitment) and donors (solvent demand), massive investment in training (qualified supply), and an effective business model (financial profitability). All these qualities mutually reinforce each other.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/XpLo7gjLTZ0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Esther Duflo being interviewed by Europe 1 on 15 October 2019, the day after being awarded the Nobel Prize in Economics.</span></figcaption>
</figure>
<p>Applying RCTs to development could lead to scientific breakthroughs, as long as their (many) limits and (narrow) scope are acknowledged. Claiming to be able to solve poverty with this kind of method, as do some of its advocates, including the three Nobel laureates, is a step backward on two fronts: epistemological, since this claim demonstrates an outdated positivist view of science; and <a href="https://lantpritchett.org/rct/">political</a>, since questions central to understanding the fight against <a href="http://www.ras.org.in/index.php?Article=randomise_this_on_poor_economics&q=randomized&keys=randomized">poverty and inequality</a> aren’t addressed with this approach.</p>
<p>Will this award lead the randomizers of development to be more measured about the benefits of different methods or, on the contrary, will they use this opportunity to consolidate their largely dominant position? There are good reasons for concern.</p>
<hr>
<p><em>Florent Bédécarrats contributed to this article</em>.</p>
<p><em>Translated from the French by Alice Heathwood for <a href="http://www.fastforword.fr/en/">Fast ForWord</a></em>.</p><img src="https://counter.theconversation.com/content/125888/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Les auteurs ne travaillent pas, ne conseillent pas, ne possèdent pas de parts, ne reçoivent pas de fonds d'une organisation qui pourrait tirer profit de cet article, et n'ont déclaré aucune autre affiliation que leur organisme de recherche.</span></em></p>The 2019 Nobel Prize in Economics pays tribute to randomized control trials, but can they really help us fight poverty?Isabelle Guérin, Directrice de recherche à l'IRD-Cessma, Membre de l'Institute of Advanced Study, Princeton (2019-2020), Institut de recherche pour le développement (IRD)François Roubaud, Économiste, statisticien, directeur de recherche à l’IRD et membre de l’UMR DIAL, Institut de recherche pour le développement (IRD)Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1255682019-10-24T19:00:48Z2019-10-24T19:00:48ZPredicting research results can mean better science and better advice<figure><img src="https://images.theconversation.com/files/298477/original/file-20191024-31500-7c7lpw.jpg?ixlib=rb-1.1.0&rect=50%2C0%2C5635%2C3753&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Putting scientific results under the microscope before they are even collected could help improve science as a whole.</span> <span class="attribution"><span class="source">Konstantin Kolosov/Shutterstock</span></span></figcaption></figure><p>We ask experts for advice all the time. A company might ask an economist for advice on how to motivate its employees. A government might ask what the effect of a policy reform will be.</p>
<p>To give the advice, experts often would like to draw on the results of an experiment. But they don’t always have relevant experimental evidence.</p>
<p>Collecting expert predictions about research results could be a powerful new tool to help improve science - and the advice scientists give.</p>
<h2>Better science</h2>
<p>In the past few decades, academic rigour and transparency, particularly in the social sciences, have greatly improved. </p>
<p>Yet, as Australia’s Chief Scientist Alan Finkel <a href="https://theconversation.com/there-is-a-problem-australias-top-scientist-alan-finkel-pushes-to-eradicate-bad-science-123374">recently argued</a>, there is still much to be done to minimise “bad science”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/there-is-a-problem-australias-top-scientist-alan-finkel-pushes-to-eradicate-bad-science-123374">'There is a problem': Australia's top scientist Alan Finkel pushes to eradicate bad science</a>
</strong>
</em>
</p>
<hr>
<p>He recommends changes to the way research is measured and funded. Another increasingly <a href="https://www.sciencemag.org/news/2018/09/more-and-more-scientists-are-preregistering-their-studies-should-you">common approach</a> is to conduct randomised controlled trials and pre-register studies to avoid bias in which results are reported.</p>
<p>Expert predictions can be yet another tool for making research stronger, as my co-authors Stefano DellaVigna, Devin Pope and I argue in a new article published in <a href="https://science.sciencemag.org/content/366/6464/428">Science</a>.</p>
<h2>Why predictions?</h2>
<p>The way we interpret research results depends on what we already believe. For example, if we saw a study claiming to show that smoking was healthy, we would probably be pretty sceptical.</p>
<p>If a result surprises experts, that fact itself is informative. It could suggest that something may have been wrong with the study design. </p>
<p>Or, if the study was well-designed and the finding replicated, we might think that result fundamentally changed our understanding of how the world works.</p>
<p>Yet currently researchers rarely collect information that would allow them to compare their results with what the research community believed beforehand. This makes it hard to interpret the novelty and importance of a result.</p>
<p>The academic publication process is also plagued by bias against publishing insignificant, or “null”, results. </p>
<p>The collection of advance forecasts of research results could combat this bias by making null results more interesting, as they may indicate a departure from accepted wisdom.</p>
<h2>Changing minds</h2>
<p>As well as directly improving the interpretation of research results, collecting advance forecasts can help us understand how people change their minds.</p>
<p>For example, my colleague Aidan Coville and I <a href="http://evavivalt.com/wp-content/uploads/How-Do-Policymakers-Update1.pdf">collected advance forecasts from policymakers</a> to study what effect academic research results had on their beliefs. We found in general they were more receptive to “good news” than “bad news” and ignored uncertainties in results. </p>
<p>Forecasts can also inform us as to which potential studies could most improve policy decisions.</p>
<p>For example, suppose a research team has to pick one of ten interventions to study. For some of the interventions, we are pretty sure what a study would find, and a new study would be unlikely to change our minds. For others, we are less sure, but they are unlikely to be the best intervention.</p>
<p>If predictions were collected in advance, they could tell us which intervention to study to have the biggest policy impact.</p>
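The selection idea above can be made concrete with a toy calculation. The interventions, the forecast numbers and the disagreement-based scoring rule below are all hypothetical illustrations, not something proposed in the Science article; they simply capture the intuition that a study is most valuable where experts disagree and the intervention might plausibly be the best one.

```python
# Toy sketch (our illustration, not the authors' method): rank candidate
# interventions by how much a new study could change a policy decision.
from statistics import mean, stdev

# Hypothetical expert forecasts of each intervention's effect size.
forecasts = {
    "cash transfer": [0.30, 0.32, 0.31],   # near-consensus: little to learn
    "training":      [0.05, 0.40, 0.20],   # high disagreement, could be best
    "information":   [0.02, 0.04, 0.03],   # consensus that it is weak
}

def study_value(preds, best_so_far):
    """Disagreement matters only if the intervention might beat the field."""
    return stdev(preds) if max(preds) >= best_so_far else 0.0

best_mean = max(mean(p) for p in forecasts.values())
pick = max(forecasts, key=lambda k: study_value(forecasts[k], best_mean))
print(pick)  # the intervention whose study would be most informative
```

Under this rule, the near-consensus intervention and the one experts agree is weak both score low; the study goes to the intervention where forecasts diverge.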
<h2>Testing forecasts</h2>
<p>In the long run, if expert forecasts can be shown to be fairly accurate,
they could provide some support for policy decisions where rigorous studies can’t be conducted.</p>
<p>For example, Stefano DellaVigna and Devin Pope <a href="https://www.journals.uchicago.edu/doi/10.1086/699976?mobileUi=0&">collected forecasts</a> about how different incentives change the amount of effort people put into completing a task.</p>
<p>As you can see in the graph below, the forecasts were not perfect (a dot on the dashed diagonal line would represent a perfect match of forecast and result). But there does appear to be some correlation between the aggregated forecasts and the results.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=436&fit=crop&dpr=1 600w, https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=436&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=436&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=548&fit=crop&dpr=1 754w, https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=548&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/298204/original/file-20191022-55674-1kr6ucb.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=548&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Reproduced with permission from DellaVigna and Pope.</span>
</figcaption>
</figure>
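The "some correlation" claim can be checked with a small sketch. The forecast and result numbers below are invented for illustration, not DellaVigna and Pope's data; only the recipe of aggregating forecasts and then correlating them with results reflects the idea in the text.

```python
# Hypothetical illustration of comparing aggregated expert forecasts
# with observed results; the numbers are invented.
from statistics import mean

forecasts = [[1500, 1800, 2100], [1900, 2000, 2200], [1600, 1650, 1700]]
results = [1700, 2050, 1690]

# Aggregate each treatment's forecasts by taking the mean.
aggregated = [mean(f) for f in forecasts]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(aggregated, results), 2))
```

A coefficient near 1 would mean the aggregated forecasts track the results closely; a coefficient near 0 would mean they carry little information.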
<h2>A central place for forecasts</h2>
<p>To make the most of forecasts of research results, they should be collected systematically. </p>
<p>Over time, this would help us assess how accurate individual forecasters are, teach us how best to aggregate forecasts, and tell us which types of results tend to be well predicted. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-is-best-when-the-data-is-an-open-book-49147">Science is best when the data is an open book</a>
</strong>
</em>
</p>
<hr>
<p>We built a platform that researchers can use to collect forecasts about their experiments from researchers, policymakers, practitioners, and other important audiences. The beta website can be viewed <a href="http://socialscienceprediction.org/">here</a>. </p>
<p>While we are focusing first on our own discipline – economics – we think such a tool should be broadly useful. We would encourage researchers in any academic field to consider collecting predictions of research results.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=341&fit=crop&dpr=1 600w, https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=341&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=341&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=428&fit=crop&dpr=1 754w, https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=428&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/298205/original/file-20191022-55665-xrz1q2.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=428&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The Social Science Prediction Platform, https://socialscienceprediction.org/.</span>
</figcaption>
</figure>
<p>There are many potential uses for predictions of research results beyond those described here. Many other academics are also exploring this area, for example through the <a href="https://www.replicationmarkets.com/">Replication Markets</a> and <a href="https://replicats.research.unimelb.edu.au/">repliCATS</a> projects that are part of a large <a href="https://www.darpa.mil/program/systematizing-confidence-in-open-research-and-evidence">research initiative</a> on replication. </p>
<p>The multiple possible uses of research forecasts give us confidence that a more rigorous and systematic treatment of prior beliefs can greatly improve the interpretation of research results and ultimately improve the way we do science.</p>
<p class="fine-print"><em><span>Eva Vivalt receives funding from the Alfred P. Sloan Foundation and the John Mitchell Economics of Poverty Lab. </span></em></p>Researchers rarely collect information that lets them to compare their results with what was believed beforehand. If they did, it could help spot new or important findings more readily.Eva Vivalt, Research Fellow and Lecturer, Australian National UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1240762019-09-24T20:14:18Z2019-09-24T20:14:18ZReal problem, wrong solution: why the Nationals shouldn’t politicise the science replication crisis<p>The <a href="https://campusmorningmail.com.au/news/national-party-wants-independent-agency-to-vet-research/">National Party</a>, Queensland farming lobby group <a href="https://www.abc.net.au/7.30/farmers-fight-tough-new-rules-to-protect-the-great/11526168">AgForce</a>, and MP <a href="https://www.bobkatter.com.au/media/media-releases/view/1029/katter-demands-govt-audit-reef-quality-science-/media-releases">Bob Katter</a> have banded together to propose an “independent science quality assurance agency”.</p>
<p>To justify their position, Liberal-National MP George Christensen and AgForce’s Michael Guerin specifically invoked the “replication crisis” in science, in which researchers in various fields have found it difficult or impossible to reproduce and validate original research findings. Their proposal, however, is not a good solution to the problem. </p>
<p>The more important context is that these politicians and lobbyists are opposed to <a href="https://www.qld.gov.au/environment/agriculture/sustainable-farming/reef/reef-regulations/strengthening-regulations">new laws</a> to curb agricultural runoff onto the Great Barrier Reef that are underpinned by research finding evidence of <a href="https://theconversation.com/cloudy-issue-we-need-to-fix-the-barrier-reefs-murky-waters-39380">harm from poor water quality</a>. Christensen <a href="https://www.facebook.com/gchristensenmp/photos/a.769408183114112/2334140303307551/?type=3&theater">suggests</a> that many scientific papers behind such regulation “have never been tested and their conclusions may be wrong”. But Christensen seems to be targeting specific results he doesn’t like, rather than trying to improve scientific practice in a systematic way.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-is-in-a-reproducibility-crisis-how-do-we-resolve-it-16998">Science is in a reproducibility crisis – how do we resolve it?</a>
</strong>
</em>
</p>
<hr>
<p>In various scientific areas, including psychology and preclinical medicine, <a href="https://www.nature.com/articles/d41586-018-06075-z">large-scale replication projects</a> have failed to reproduce the findings of many original studies. The rates of success differ between fields, but on average only <a href="https://cos.io/about/news/28-classic-and-contemporary-psychology-findings-replicated-more-60-laboratories-each-across-three-dozen-nations-and-territories/">half</a> <a href="https://www.castoredc.com/blog/replication-crisis-medical-research">or</a> <a href="https://www.nature.com/news/over-half-of-psychology-studies-fail-reproducibility-test-1.18248">fewer</a> of published studies were successfully replicated. Clearly there is a problem.</p>
<p>Much of the problem is due to hyper-competitiveness in science, funding shortfalls, publication practices, and the use of performance metrics that privilege quantity over quality. </p>
<p>Scientists themselves have <a href="https://theconversation.com/there-is-a-problem-australias-top-scientist-alan-finkel-pushes-to-eradicate-bad-science-123374">documented the poor practices</a> that underlie this crisis, such as the <a href="https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421">misuse of statistics</a>, often unwittingly, in ways that bias findings towards attention-grabbing conclusions. These practices distort the evidence available to policy-makers and other researchers. </p>
<p>Scientists have also already produced responses to some problems: <a href="https://www.nature.com/articles/d41586-019-02674-6">reforms in peer review</a>, <a href="https://cos.io/top/">guidelines for methods and statistical reporting</a>, and <a href="https://osf.io/dashboard">new platforms for data sharing</a>. These improvements are possible only by taking the replication crisis seriously. Paying lip service to it so as to attack particular legislation is the opposite of this.</p>
<h2>Making decisions under uncertainty</h2>
<p>Establishing an agency with a mission to adjudicate on hand-picked scientific results would make things worse. </p>
<p>At best, such an agency will be one more review panel. At worst, it will be a bureaucratic front for the political agenda of the day. Either way, it will make scientists even more cautious, and delay the flow of information to policy-makers.</p>
<p>The track records of the lobbyists involved in this latest move suggest that they have little genuine interest in improving science. AgForce reportedly <a href="https://www.theguardian.com/australia-news/2019/may/02/agforce-deletes-decades-worth-of-data-from-government-funded-barrier-reef-program">deleted more than a decade’s worth of data meant for a government water quality program</a> in advance of the new runoff regulations taking effect.</p>
<p>Exploiting scientific uncertainty has long been a classic tactic of industry lobbyists. It has been used to justify inaction on everything from <a href="https://www.merchantsofdoubt.org/">tobacco</a> to <a href="https://www.thegwpf.com/donna-laframboise-peer-review-why-skepticism-is-essential/">climate change</a>. Local politicians and lobby groups seem to be copying moves from a well-worn overseas playbook in their misuse of the replication crisis.</p>
<p>Scientists can never make pronouncements with the certainty of a politician. But if, as a society, we want to benefit fully from science, we need to accept the idea of scientific uncertainty. The existence of uncertainties does not justify rejection of the best available evidence.</p>
<h2>To defend science we need to improve it</h2>
<p>It is tempting to respond to politically motivated attacks on science by simply pointing to the excellent track record of scientific knowledge, or the good intentions of the vast majority of scientists. </p>
<p>But there is a better reason: scientists themselves have been improving science. As advocates of reform, we have been told that pointing out problems helps the anti-science movement. We disagree: being open about our work to improve science is essential for building public trust.</p>
<p>Science is something that humans do. It is self-correcting when, and only when, scientists <a href="https://twitter.com/jamesheathers/status/845696144999137280">correct it</a>. Research is hard work, and we can’t expect scientists never to make errors or to provide complete certainty. But we can expect scientists to create a culture that values detecting and correcting errors.</p>
<p>Admitting errors in one’s own work, finding them in others’ work, reporting them, retracting results when necessary, and correcting the record are activities that should be the most highly regarded of scientific practices. We need to shift the balance of rewards away from rewarding only groundbreaking discoveries, and towards the painstaking work of confirmation.</p>
<p>A cultural shift in this regard is already underway, to better align <a href="https://www.abc.net.au/radionational/programs/bigideas/sharing-science-%E2%80%93-for-the-good-of-all/11330816">scientific practices with scientific values</a>. But there is more to be done, and governments can help.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/scientific-data-should-be-shared-an-open-letter-to-the-arc-9458">Scientific data should be shared: an open letter to the ARC</a>
</strong>
</em>
</p>
<hr>
<p>There are sensible policies to support the open science initiatives that will reduce error production and increase error detection in scientific work. Different fields need different approaches, but here are two ideas.</p>
<p>First, improve funding allocation procedures. Reward self-correcting activities such as replication studies. Don’t require every piece of funded research to be groundbreaking. Don’t rely on flawed metrics. Enforce best-practice data management and open data practices whenever feasible. This can all be done without establishing an inefficient agency whose likely effect is to delay action.</p>
<p>Second, <a href="https://theconversation.com/from-fraud-to-fair-play-australia-must-support-research-integrity-15733">establish a national independent office of research integrity</a> to allow errors in the scientific literature, whether deliberate or accidental, to be corrected in a fair, efficient, and systematic way. Unlike the politicians’ proposal, this would improve the process for all researchers, not just act as a handbrake on research findings that lobbyists don’t like.</p>
<p class="fine-print"><em><span>Martin Bush receives funding from DARPA (US Defense) for a project under the SCORE program, about predicting the likelihood of replication of published studies in social science.</span></em></p><p class="fine-print"><em><span>Alex O. Holcombe has received funding from the Australian Research Council. </span></em></p><p class="fine-print"><em><span>Bonnie Wintle receives funding from a University of Melbourne Research Fellowship (Career Interruptions). She also receives funding from DARPA (US Defense) for a project under the SCORE program, about predicting the likelihood of replication of published studies in social science. </span></em></p><p class="fine-print"><em><span>Fiona Fidler receives funding from the ARC, including a current Future Fellowship about replicability and reproducibility in ecology and environmental science. She also has funding from DARPA (US Defense) for a project under the SCORE program, about predicting the likelihood of replication of published studies in social science.</span></em></p><p class="fine-print"><em><span>Simine Vazire receives funding from the National Science Foundation (USA) and the Templeton Foundation. </span></em></p>Across science, only around half of published results can be successfully replicated. But while this is a serious problem, the proposed public audit looks like a political bid to cast doubt on science.Martin Bush, Research Fellow in History and Philosophy of Science, The University of MelbourneAlex O. 
Holcombe, Professor, School of Psychology, University of SydneyBonnie Claire Wintle, Research fellow, The University of MelbourneFiona Fidler, Associate Professor, School of Historical and Philosophical Studies, The University of MelbourneSimine Vazire, Professor, University of California, DavisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1224732019-08-28T20:03:17Z2019-08-28T20:03:17ZThere’s no evidence caesarean sections cause autism or ADHD<figure><img src="https://images.theconversation.com/files/289753/original/file-20190828-184222-12ywzar.jpg?ixlib=rb-1.1.0&rect=35%2C53%2C3958%2C2940&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Caesarean delivery alone does not contribute to the odds of a child developing autism or ADHD.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/5zp0jym2w9M">Aditya Romansa</a></span></figcaption></figure><p>A <a href="https://doi.org/10.1001/jamanetworkopen.2019.10236">new study</a> that combines data from over 20 million births has found that a caesarean section delivery is associated with autism spectrum disorder (autism) and attention-deficit hyperactivity disorder (ADHD). </p>
<p>However, the study does not indicate that caesarean section deliveries cause autism or ADHD. The truth is much more difficult to decipher, and provides an excellent case study for the old adage that correlation doesn’t equal causation. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/clearing-up-confusion-between-correlation-and-causation-30761">Clearing up confusion between correlation and causation</a>
</strong>
</em>
</p>
<hr>
<h2>Remind me, what are these disorders?</h2>
<p>Autism and ADHD are what we call neurodevelopmental disorders. This means they involve clear differences in behavioural development, which we presume are due to differences in the brain. </p>
<p>In the case of autism, behavioural differences occur in the part of the brain primarily responsible for social and communication development. For ADHD, these differences affect the ability to control and direct attention. </p>
<p>The exact reasons why the brain develops differently are not entirely clear. Studies in twins, which are able to help us understand the role of genetic and environmental influences on a given trait, have shown that both autism and ADHD involve a large genetic component. </p>
<p>However, these studies have also indicated that environmental influences, such as <a href="https://theconversation.com/what-causes-autism-what-we-know-dont-know-and-suspect-53977">bacterial or viral infections</a> during pregnancy, may play a role in the development of these conditions, most likely through interactions with genetic make-up.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-causes-autism-what-we-know-dont-know-and-suspect-53977">What causes autism? What we know, don’t know and suspect</a>
</strong>
</em>
</p>
<hr>
<h2>What did this study find?</h2>
<p>The association between caesarean sections and autism has been <a href="https://jamanetwork.com/journals/jamapsychiatry/article-abstract/482014">known for close to two decades</a>. Any possible link with ADHD has received comparatively less research attention, but there have still been numerous studies in this area. </p>
<p>Today’s study, <a href="https://doi.org/10.1001/jamanetworkopen.2019.10236">published in the journal JAMA Network Open</a>, combines all of the studies conducted previously into a single analysis. This “meta-analysis” then allows the researchers to come up with a single estimate of how strong the association between caesarean sections, autism and ADHD may be. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289770/original/file-20190828-184202-1bdtllq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The researchers were looking for a pattern that warrants further investigation.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/xTedodxYTuQ">freestocks.org</a></span>
</figcaption>
</figure>
<p>In this case, the meta-analysis included over 20 million people. It found children born via caesarean section had an increase in odds of being diagnosed with autism or ADHD in early childhood. </p>
<p>The associations were scientifically robust, but very small. Children delivered via caesarean section were 1.33 times more likely to be diagnosed with autism and 1.17 times more likely to be diagnosed with ADHD.</p>
<p>Because the prevalence of these conditions is already relatively low (around <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5919599/">1%</a> for autism, and <a href="https://pediatrics.aappublications.org/content/135/4/e994.short">7%</a> for ADHD), this increase in odds translates to only a small absolute change. For autism, it amounts to a shift in prevalence from about 1% to about 1.33%. A shift of that size is not consequential and certainly does not call for any change in our clinical practice.</p>
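For readers who want to check the arithmetic, converting an odds ratio into a change in prevalence is a standard epidemiological calculation. The sketch below uses the figures quoted above (a 1% autism prevalence with an odds ratio of 1.33, and a 7% ADHD prevalence with an odds ratio of 1.17).

```python
# Back-of-the-envelope check of the figures quoted in the article.
def apply_odds_ratio(baseline_prevalence, odds_ratio):
    """Prevalence implied by applying an odds ratio to a baseline prevalence."""
    baseline_odds = baseline_prevalence / (1 - baseline_prevalence)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

autism = apply_odds_ratio(0.01, 1.33)   # roughly 0.0133, i.e. about 1.33%
adhd = apply_odds_ratio(0.07, 1.17)     # roughly 0.081, i.e. about 8.1%
print(round(autism, 4), round(adhd, 3))
```

At low prevalences, odds and probabilities are nearly identical, which is why the 1.33 odds ratio maps almost exactly onto a 1% to 1.33% shift.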
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-you-need-to-know-to-understand-risk-estimates-67643">What you need to know to understand risk estimates</a>
</strong>
</em>
</p>
<hr>
<p>This association was similar for children born by either elective or emergency caesarean section. </p>
<h2>But what does it mean?</h2>
<p>The temptation with findings like this is to draw a causal link between one factor (caesarean section) and the other (autism or ADHD). Unlike so many other areas of science, the conclusions are easily understood and the implications appear obvious. </p>
<p>But the simplicity is deceptive, and says more about our desire for simple answers than it does about the truth of the science. </p>
<p>The studies included in this meta-analysis used a branch of science called epidemiology, which is concerned with how often conditions and diseases occur in different groups of people and why, and how to prevent or manage them.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/epidemiology-and-the-media-15972">Epidemiology and the media </a>
</strong>
</em>
</p>
<hr>
<p>Epidemiological studies survey a large population and find a pattern of results that indicate a certain factor may be coinciding with a certain disease more often than we would expect by chance. </p>
<p>In this case, there is the observation that people with autism or ADHD are more likely to be born by caesarean section than we would otherwise typically expect. </p>
<p>But this kind of epidemiological study is unable to determine if one factor (caesarean section) causes another (ADHD or autism). </p>
<p>There are two key reasons why. </p>
<p>First, we can’t rule out that a third factor may be influencing this association. We know, for example, that caesarean sections are more common for pregnant women who are <a href="https://www.sciencedirect.com/science/article/abs/pii/S0002937803019240">obese</a> and <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1523-536X.2010.00409.x">older</a>, and who have a <a href="https://www.tandfonline.com/doi/abs/10.3109/14767058.2013.847080">history of immune conditions</a> such as asthma. </p>
<p>All of these factors have also been linked with an increased chance of having a child with autism, and it is entirely possible – and some would argue, <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/jcpp.12351">probable</a> – that it is more likely these factors underlie the relationship between caesarean section and neurodevelopmental disorders. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/289762/original/file-20190828-184196-e2ia6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The link might be due to other factors such as the mother’s age or weight.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/I0ItPtIsVEE">Christian Bowen</a></span>
</figcaption>
</figure>
<p>The second reason is that these kinds of epidemiological studies are unable to provide what scientists call “a mechanism” – that is, a biological explanation as to why this association may exist. </p>
<p>A mechanism study in this area might explore biological differences between newborns born via vaginal and caesarean delivery, and how these differences may lead to atypical behavioural development.</p>
<p>Without a strong body of evidence from these kinds of studies, there is simply no scientific basis for concluding a causal link between caesarean section and neurodevelopmental disorders. </p>
<h2>So what should we take away from this study?</h2>
<p>The study provides a strong basis for concluding there is a statistical association between caesarean section delivery on one hand, and autism and ADHD on the other. But that’s about it. </p>
<p>Why this link exists remains unknown, but it is almost certain that a caesarean delivery alone does not contribute to the odds of a child developing autism or ADHD. </p>
<p>Instead, it is likely that other pregnancy factors play a role in this relationship, as well as genetic factors that may interact with the environmental influences during pregnancy to contribute to brain development. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/adhd-prescriptions-are-going-up-but-that-doesnt-mean-were-over-medicating-108474">ADHD prescriptions are going up, but that doesn't mean we're over-medicating</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/122473/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Andrew Whitehouse receives funding from the National Health and Medical Research Council, the Australian Research Council and the Autism CRC.</span></em></p>A new study has found a link between being born by caesarean section and having a greater chance of being diagnosed with autism or ADHD. But there’s no evidence caesarean sections cause them.Andrew Whitehouse, Bennett Chair of Autism, Telethon Kids Institute, Univeristy of Western Australia, The University of Western AustraliaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1149882019-04-22T20:23:59Z2019-04-22T20:23:59ZYou look but do not find: why the absence of evidence can be a useful thing<figure><img src="https://images.theconversation.com/files/269736/original/file-20190417-139120-14habdn.jpg?ixlib=rb-1.1.0&rect=186%2C0%2C4966%2C3264&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What you find depends on what you're looking for.</span> <span class="attribution"><span class="source">Ray Bond/Shutterstock</span></span></figcaption></figure><p>Imagine you’re looking for your keys and you think you might have left them on the bookshelf. But when you look, you see nothing but books. A natural conclusion to draw is that the keys are not there.</p>
<p>Now imagine you’re an early 20th century astrophysicist seeking to test the <a href="https://www.britannica.com/biography/Urbain-Jean-Joseph-Le-Verrier">hypothesis</a> that there is a planet (Vulcan) causing perturbations in Mercury’s orbit. You keep looking but find nothing. You conclude that Vulcan does not exist.</p>
<p>Both arguments seem straightforward, and yet in both cases you are relying on an assumption that an absence of evidence can be a good reason for inferring that what you are looking for is just not there. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/sorry-mr-spock-science-and-emotion-are-not-only-compatible-theyre-inseparable-94034">Sorry Mr Spock: science and emotion are not only compatible, they're inseparable</a>
</strong>
</em>
</p>
<hr>
<p>In other words, an absence of evidence is evidence of absence.</p>
<p>But it’s the opposite assumption — that an absence of evidence is not evidence of absence — that has come to have the status of a received truth.</p>
<h2>Of gods and aliens</h2>
<p>Consider the recent pronouncement by the 2019 <a href="http://www.templetonprize.org">Templeton Prize</a> winner, the US-based physicist <a href="https://marcelogleiser.com/">Marcelo Gleiser</a>, that <a href="https://www.scientificamerican.com/article/atheism-is-inconsistent-with-the-scientific-method-prizewinning-physicist-says/">atheism is inconsistent with the scientific method</a>. </p>
<p>Endeavouring perhaps to put a catechism among the dogmatists, Gleiser reasons that atheists are unscientific precisely because they assume that an absence of evidence (of God’s existence) is evidence of an absence (of God).</p>
<p>That, <a href="https://www.scientificamerican.com/article/atheism-is-inconsistent-with-the-scientific-method-prizewinning-physicist-says/">he asserts</a>, is contrary to the scientific method. Absence of evidence is not evidence of an absence and science abhors a dogmatist.</p>
<p>Gleiser is in interesting company. British astrophysicist Martin Rees, in his 2011 book <a href="https://www.goodreads.com/book/show/13616993-from-here-to-infinity">From Here to Infinity: Scientific Horizons</a>, used the slogan to suggest the possibility of an undiscovered, super-intelligent animal species on Earth and extraterrestrial intelligence elsewhere in the universe. <a href="https://books.google.com.au/books?redir_esc=y&id=1C0ckgAACAAJ&q=%22absence+of+evidence%22">He wrote</a>:</p>
<blockquote>
<p>There may be a lot more out there than we could ever detect. Absence of evidence wouldn’t be evidence of absence.</p>
</blockquote>
<p>Rees was the <a href="http://www.templetonprize.org/previouswinner.html#rees">2011 Templeton prize winner</a> and a past president of the <a href="https://royalsociety.org/">Royal Society</a>, the world’s oldest independent scientific academy, whose luminaries include Isaac Newton, Robert Boyle, Charles Darwin, Albert Einstein and Stephen Hawking.</p>
<h2>Communist connections?</h2>
<p>During the Cold War, US Senator Joseph McCarthy reportedly justified naming someone as a communist, despite a complete lack of evidence, <a href="https://books.google.com.au/books?id=h04T6e77NsMC&lpg=PA132&vq=%22nothing%20in%20the%20files%20to%20disprove%20his%20Communist%20connections%22&pg=PA132#v=snippet&q=%22nothing%20in%20the%20files%20to%20disprove%20his%20Communist%20connections%22&f=false">on the grounds that</a>:</p>
<blockquote>
<p>I do not have much information on this except the general statement of the agency [unidentified] that there is nothing in the files to disprove his Communist connections.</p>
</blockquote>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/269906/original/file-20190418-139104-1pqtzjb.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Former US Defense Secretary Donald Rumsfeld receiving the Defender of the Constitution Award in 2011.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/gageskidmore/5446795136/">Flickr/Gage Skidmore</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>At a NATO press conference in 2002, the then <a href="https://www.nato.int/docu/speech/2002/s020606g.htm">US defense secretary Donald Rumsfeld declared</a> the war in Iraq justified on the grounds that although there was no evidence Iraq had weapons of mass destruction (WMDs):</p>
<blockquote>
<p>Simply because you do not have evidence that something exists does not mean that you have evidence that it doesn’t exist. </p>
</blockquote>
<p>A hidden God? Extraterrestrials? Communists? WMDs? If this is where the slogan “absence of evidence is not evidence of absence” leads, why would anyone find it compelling?</p>
<p>The slogan sounds like a cautionary tale – a healthy dose of scepticism to ward off the pox of hasty inferences drawn from a paucity of evidence. But trouble brews when cautionary tales get deployed as indisputable methodological principles.</p>
<h2>Do fish feel pain?</h2>
<p>Consider, for example, how the slogan is used against the following (abbreviated) absence of evidence argument:</p>
<blockquote>
<p>Animals that feel pain possess the neural circuitry enabling them to execute the neural computations that lead to pain. There is no evidence that fish possess such circuitry. Hence, fish don’t feel pain.</p>
</blockquote>
<p>Evidence purported to support the argument that fish feel pain has been <a href="https://link.springer.com/article/10.1007/s10539-014-9469-4" title="Fish do not feel pain and its implications for understanding phenomenal consciousness">strongly</a> <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/faf.12010" title="Can fish really feel pain?">discredited</a> by neuroscientists, but the discrediting has been widely ignored, primarily because of <a href="https://animalstudiesrepository.org/animsent/vol1/iss3/28/" title="Anthropomorphic denial of fish pain">the false belief that</a> “incompleteness of current knowledge certainly does not constitute evidence for inferring that fish in particular do not feel pain”.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/269907/original/file-20190418-139120-1t7lw9k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Hooked!</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/143733090@N05/25527020868/">Flickr/matt dean</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>But as far as science can tell, the <a href="https://animalstudiesrepository.org/animsent/vol1/iss3/1/" title="Why fish do not feel pain">hardware within the fish brain</a> is simply insufficient to perform the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6088194/" title="Designing Brains for Pain: Human to Mollusc">neural computations necessary</a> for a nervous system to be consciously aware of its own inner processes, that is, for it to feel pain.</p>
<p>That’s the best we can say (so far) and that’s <a href="https://theconversation.com/wheres-the-proof-in-science-there-is-none-30570">the way science works</a>. We have found no evidence that a fish can feel pain, so in this case, we should feel confident that an absence of evidence <em>is</em> evidence of absence.</p>
<h2>When finding nothing tells you something</h2>
<p>As with the keys and Vulcan arguments at the beginning, we are warranted to infer an absence from an absence of evidence in certain contexts.</p>
<p>What kinds of contexts are those? The kinds of contexts where we could reasonably expect to find evidence if our hypothesis were true, where our methodology is sound, and where we do not obtain positive results.</p>
<p>If the hypothesis that fish feel pain were true, we could reasonably expect to find evidence of something without which pain in vertebrates does not occur. But in the case of fish, we do not find this evidence.</p>
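<p>The logic of these contexts can be made concrete with a toy Bayesian calculation (a minimal sketch; the probabilities below are illustrative assumptions, not figures from the article). If evidence would very likely be found were the hypothesis true, then looking and finding nothing should sharply lower our confidence in the hypothesis:</p>

```python
def posterior_given_no_evidence(prior, p_find_if_true, p_find_if_false=0.0):
    """Bayes' rule: P(hypothesis true | we looked and found nothing)."""
    p_miss_if_true = 1 - p_find_if_true    # chance of missing real evidence
    p_miss_if_false = 1 - p_find_if_false  # we always find nothing if there's nothing
    numerator = p_miss_if_true * prior
    return numerator / (numerator + p_miss_if_false * (1 - prior))

# Keys on the bookshelf: if they were there, we would almost surely see them,
# so seeing nothing is strong evidence of absence.
p = posterior_given_no_evidence(prior=0.5, p_find_if_true=0.99)
print(round(p, 3))  # 0.01 -- credence collapses from 0.50
```

The same arithmetic shows why absence of evidence carries little weight when evidence would be hard to find even if the hypothesis were true (a small <code>p_find_if_true</code> barely moves the prior).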
<p>Critics of the fish argument assume that by deploying the slogan an absence of evidence is not evidence of absence, they have discharged the burden of proof. They insist that the proponent of the fish argument must positively rule out the possibility that some unknown or yet-to-be-identified region of the fish brain produces pain. </p>
<p>This is not how the burden of proof works.</p>
<p>If you doubt that Iraq had WMDs (because there was no evidence it did), you do not have the burden of proving that you are right. Nor do you have the burden of disproving that super-intelligent terrestrials or extraterrestrials exist.</p>
<p>The burden rests with those who claim that such things are probable enough to be live options.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/wheres-the-proof-in-science-there-is-none-30570">Where's the proof in science? There is none</a>
</strong>
</em>
</p>
<hr>
<p>Similarly, if you accept that fish lack the capacity to feel pain, why not task the doubters to <a href="https://animalstudiesrepository.org/animsent/vol1/iss3/44/" title="Burden of proof lies with proposer of celestial teapot hypothesis">prove that fish feel pain</a>?</p>
<p>Ordinary hypothesis testing, revision and replacement – the very falsifiability of scientific hypotheses – depends on being able to assume that in certain contexts of inquiry an absence of evidence can serve as evidence of absence.</p>
<p>What science eschews is not a role for negative findings but the reliance on slogans of any stripe parading as received truths.</p><img src="https://counter.theconversation.com/content/114988/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Deborah Brown receives funding from the Australian Research Council.</span></em></p><p class="fine-print"><em><span>Brian Key has received funding from NHMRC and ARC. </span></em></p>Some people argue the absence of evidence is not evidence of absence, you just need to keep looking. But there are occasions where finding no evidence is all you can do.Deborah Brown, Professor in Philosophy, The University of QueenslandBrian Key, Professor and Head of Brain Growth and Regeneration Lab, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1037362019-04-08T10:44:52Z2019-04-08T10:44:52ZThe replication crisis is good for science<figure><img src="https://images.theconversation.com/files/258873/original/file-20190213-181604-48h9rx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some studies don't hold up to added scrutiny. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/magnifying-glass-pen-over-graph-on-169007078?src=ZXM3US3bRC1l_2YUkqKtLQ-1-37">PORTRAIT IMAGES ASIA BY NONWARIT/shutterstock.com</a></span></figcaption></figure><p>Science is in the midst of a crisis: A surprising fraction of published studies fail to replicate when the procedures are repeated. </p>
<p>For example, take the study, published in 2007, that claimed that tricky math problems requiring careful thought <a href="https://doi.org/10.1037/0096-3445.136.4.569">are easier to solve when presented in a fuzzy font</a>. When researchers found in a small study that using a fuzzy font improved performance accuracy, it supported a claim that encountering perceptual challenges could induce people to reflect more carefully.</p>
<p>However, <a href="https://doi.org/10.1037/xge0000049">16 attempts to replicate the result failed</a>, definitively demonstrating that the original claim was erroneous. Plotted together on a graph, the studies formed a perfect bell curve centered around zero effect. As is frequently the case with failures to replicate, of the 17 total attempts, the original had both the smallest sample size and the most extreme result.</p>
<p>The Reproducibility Project, a collaboration of 270 psychologists, has <a href="https://osf.io/ezcuj/">attempted to replicate 100 psychology studies</a>, while <a href="https://doi.org/10.1038/s41562-018-0399-z">a 2018 report</a> examined studies published in the prestigious scholarly journals Nature and Science between 2010 and 2015. These efforts find that about two-thirds of studies do replicate to some degree, but that the strength of the findings is often weaker than originally claimed. </p>
<p>Is this bad for science? It’s certainly uncomfortable for many scientists whose work gets undercut, and the rate of failures may currently be unacceptably high. But, as a psychologist and a statistician, I believe confronting the replication crisis is good for science as a whole.</p>
<h2>Practicing good science</h2>
<p>First, these replication attempts are examples of good science operating as it should. They are focused applications of the scientific method, careful experimentation and observation in the pursuit of reproducible results. </p>
<p>Many people incorrectly assume that, due to the “p<.05” threshold for statistical significance, only 5% of discoveries will prove to be errors. However, 15 years ago, physician John Ioannidis pointed to some fallacies in that assumption, arguing that false discoveries <a href="https://doi.org/10.1371/journal.pmed.0020124">made up the majority of the published literature</a>. Replication efforts are confirming that the false discovery rate is much higher than 5%. </p>
<p>Awareness about the replication crisis appears to be promoting better behavior among scientists. Twenty years ago, the cycle for publication was basically complete after a scientist convinced three reviewers and an editor that the work was sound. Yes, the published research would become part of the literature, and therefore open to review – but that was a slow-moving process. </p>
<p>Today, the stakes have been raised for researchers. They know that there’s the possibility that their study might be reviewed by thousands of opinionated commenters on the internet or by a high-profile group like the Reproducibility Project. Some journals now require scientists to make their data and computer code available, which makes it likelier that others will catch errors in their work. What’s more, some scientists can now “preregister” their hypotheses before starting their study – the equivalent of calling your shot before you take it.</p>
<p>Combined with open sharing of materials and data, preregistration improves the transparency and reproducibility of science, hopefully ensuring that a smaller fraction of future studies will fail to replicate. </p>
<p>While there are signs <a href="https://fivethirtyeight.com/features/psychologys-replication-crisis-has-made-the-field-better/">that scientists are indeed reforming their ways</a>, there is still a long way to go. Out of the 1,500 accepted presentations <a href="https://plan.core-apps.com/sbm_annual2019">at the annual meeting for the Society for Behavioral Medicine in March</a>, only 1 in 4 of the authors reported using these open science techniques in the work they presented. </p>
<h2>Improving statistical intuition</h2>
<p>Finally, the replication crisis is helping improve scientists’ intuitions about statistical inference. </p>
<p>Researchers now better understand how weak designs with high uncertainty – in combination with choosing to publish only when results are statistically significant – produce exaggerated results. In fact, it is one of the reasons more than 800 scientists recently argued in favor of abandoning <a href="https://www.nature.com/articles/d41586-019-00857-9">statistical significance testing</a>.</p>
<p>We also better appreciate how isolated research findings fit into the broader pattern of results. In another study, Ioannidis and oncologist Jonathan Schoenfeld <a href="https://doi.org/10.3945/ajcn.112.047142">surveyed the epidemiology literature</a> for studies associating 40 common food ingredients with cancer. There were some broad consistent trends – unsurprisingly, bacon, salt and sugar are never found to be protective against cancer. </p>
<p>But plotting the effects from 264 studies produced a confusing pattern. The magnitudes of the reported effects were highly variable. In other words, one study might say that a given ingredient was very bad for you, while another might conclude that the harms were small. In many cases, the studies even disagreed on whether a given ingredient was harmful or beneficial. </p>
<p>Each of the studies had at some point been reported in isolation in a newspaper or a website as the latest finding in health and nutrition. But taken as a whole, the evidence from all the studies was not nearly as definitive as each single study may have appeared.</p>
<p>Schoenfeld and Ioannidis also graphed the 264 published effect sizes. Unlike the fuzzy font replications, their graph of published effects looked like the tails of a bell curve. It was centered at zero with all the nonsignificant findings carved out. The unmistakable impression from seeing all the published nutrition results presented at once is that many of them might be like the fuzzy font result – impressive in isolation, but anomalous under replication. </p>
<p>The breathtaking possibility that a large fraction of published research findings might just be serendipitous is exactly why people speak of the replication crisis. But it’s not really a scientific crisis, because the awareness is bringing improvements in research practice, new understandings about statistical inference and an appreciation that isolated findings must be interpreted as part of a larger pattern.</p>
<p>Rather than undermining science, I feel that this is reaffirming the best practices of the scientific method.</p><img src="https://counter.theconversation.com/content/103736/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Eric Loken does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Rising evidence shows that many psychology studies don’t stand up to added scrutiny. The problem has many scientists worried – but it could also encourage them to up their game.Eric Loken, Assistant Professor of Educational Psychology, University of ConnecticutLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1141612019-04-01T10:40:12Z2019-04-01T10:40:12ZIs it the end of ‘statistical significance’? The battle to make science more uncertain<figure><img src="https://images.theconversation.com/files/266165/original/file-20190327-139371-15nimd0.jpg?ixlib=rb-1.1.0&rect=3%2C0%2C2625%2C1552&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Some scientists think it's time to hang up statistical significance.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/white-robe-hanging-on-door-laboratory-321548114">mariakraynova/Shutterstock.com</a></span></figcaption></figure><p>The scientific world is abuzz following recommendations by two of the most prestigious scholarly journals – <a href="https://www.tandfonline.com/doi/full/10.1080/00031305.2019.1583913">The American Statistician</a> and <a href="https://www.nature.com/articles/d41586-019-00857-9">Nature</a> – that the term “<a href="https://en.wikipedia.org/wiki/Statistical_significance">statistical significance</a>” be retired.</p>
<p>In their introduction to the special issue of The American Statistician on the topic, the journal’s editors urge “moving to a world beyond ‘p<0.05,’” <a href="http://www.haghish.com/resources/materials/Statistical_Methods_for_Research_Workers.pdf">the famous 5 percent threshold</a> for determining whether a study’s result is statistically significant. If a study passes this test, it means that a result at least as extreme would arise by chance alone less than 5 percent of the time. This has often been understood to mean that the study is worth paying attention to.</p>
<p>The journal’s basic message – but not necessarily the consensus of the 43 articles in this issue, one of which <a href="https://doi.org/10.1080/00031305.2018.1518788">I contributed</a> – was that scientists first and foremost should “embrace uncertainty” and “be thoughtful, open and modest.”</p>
<p>While these are fine qualities, I believe that scientists must not let them obscure the precision and rigor that science demands. Uncertainty is inherent in data. If scientists further weaken the already very weak threshold of 0.05, then that would inevitably make scientific findings more difficult to interpret and less likely to be trusted. </p>
<h2>Piling difficulty on top of difficulty</h2>
<p>In the traditional practice of science, a scientist generates a hypothesis and designs experiments to test it. He or she then collects data and performs statistical analyses to determine whether the data in fact support the hypothesis. </p>
<p>One standard statistical analysis is the <a href="https://en.wikipedia.org/wiki/P-value">p-value</a>. This generates a number between 0 and 1 that indicates strong, marginal or weak support of a hypothesis.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=396&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=396&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=396&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=498&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=498&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266547/original/file-20190329-70999-ktz14q.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=498&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A quick guide to p-values.</span>
<span class="attribution"><span class="source">Repapetilto/Wikimedia</span>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>But I worry that abandoning evidence-driven standards for these judgments will make it even more difficult to design experiments, much less assess their outcomes. For instance, how could one even determine an appropriate sample size without a targeted level of precision? And how are research results to be interpreted? </p>
<p>These are important questions, not just for researchers at funding or regulatory agencies, but for anyone whose daily life is influenced by statistical judgments. That includes anyone who takes medicine or undergoes surgery, drives or rides in vehicles, is invested in the stock market, has life insurance or depends on accurate weather forecasts… and the list goes on. Similarly, many regulatory agencies rely on statistics to make decisions every day.</p>
<p>Scientists must have the language to indicate that a study, or group of studies, provided significant evidence in favor of a relationship or an effect. Statistical significance is the term that serves this purpose.</p>
<h2>The groups behind this movement</h2>
<p>Hostility to the term “statistical significance” arises from two groups.</p>
<p>The first is largely made up of scientists disappointed when their studies produce p=0.06. In other words, those whose studies just don’t make the cut. These are largely <a href="https://www.researchgate.net/publication/319880949_Justify_your_alpha">scientists who find the 0.05 standard too high</a> a hurdle for getting published in the scholarly journals that are a major source of academic knowledge – as well as tenure and promotion. </p>
<p>The second group is concerned over the <a href="https://theconversation.com/a-statistical-fix-for-the-replication-crisis-in-science-84896">failure to replicate scientific studies</a>, and they blame significance testing in part for this failure.</p>
<p>For example, <a href="https://doi.org/10.1126/science.aac4716">a group of scientists</a> recently repeated 100 published psychology experiments. Ninety-seven of the 100 original studies reported a statistically significant finding (p<0.05), but only 36 of the repeated experiments also achieved a significant result. </p>
<p>The failure of so many studies to replicate can be partially blamed on publication bias, which results when only significant findings are published. Publication bias causes scientists to overestimate the <a href="https://doi.org/10.4065/75.12.1284">magnitude of an effect</a>, such as the relationship between two variables, making replication less likely.</p>
<p>Complicating the situation even further is the fact that <a href="https://doi.org/10.1073/pnas.1313476110">recent research</a> shows that the p-value cutoff doesn’t provide much evidence that a real relationship has been found. In fact, in replication studies in social sciences, it now appears that p-values close to the standard threshold of 0.05 probably mean that a scientific claim is wrong. It’s only when the p-value is much smaller, maybe less than 0.005, that scientific claims are likely to <a href="https://www.nature.com/articles/s41562-017-0189-z">show a real relationship</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/266166/original/file-20190327-139345-fw20gp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What do the data really say?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/african-american-businessman-using-devices-business-1017688327">fizkes/shutterstock.com</a></span>
</figcaption>
</figure>
<h2>The confusion leading to this movement</h2>
<p>Many nonstatisticians confuse p-value with the <a href="https://en.wikipedia.org/wiki/Probability">probability</a> that no discovery was made.</p>
<p>Let’s look at an example from the Nature article. <a href="https://doi.org/10.1016/j.ijcard.2014.09.205">Two studies</a> examined the increased risk of disease after taking a drug. Both studies estimated that patients had a 20 percent higher risk of getting the disease if they took the drug than if they didn’t. In other words, both studies estimated the <a href="https://en.wikipedia.org/wiki/Risk_ratio">relative risk</a> to be 1.20. </p>
<p>However, the relative risk estimated from one study was more precise than the other, because its estimate was based on outcomes from many more patients. Thus, the estimate from one study was statistically significant, and the estimate from the other was not.</p>
<p>The authors cite this inconsistency – that one study obtained a significant result and the other didn’t – as evidence that statistical significance leads to misinterpretation of scientific results. </p>
<p>However, I feel that a reasonable summary is simply that one study collected statistically significant evidence and one did not, but the estimates from both studies suggested that relative risk was near 1.2.</p>
<h2>Where to go from here</h2>
<p>I agree with the Nature article and The American Statistician editorial that data collected from all well-designed scientific studies should be made publicly available, with comprehensive summaries of statistical analyses. Along with each study’s p-values, it is important to publish estimates of effect sizes and confidence intervals for these estimates, as well as complete descriptions of all data analyses and data processing. </p>
<p>On the other hand, only studies that provide strong evidence in favor of important associations or new effects should be published in premier journals. For these journals, standards of evidence should be increased by requiring smaller p-values for the initial report of relationships and new discoveries. In other words, make scientists publish results that they’re even more certain about.</p>
<p>The bottom line is that dismantling accepted standards of statistical evidence will decrease the uncertainty that scientists have in publishing their own research. But it will also increase the public’s uncertainty in accepting the findings that they do publish – and that can be problematic.</p><img src="https://counter.theconversation.com/content/114161/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Valen E. Johnson receives funding from the National Institutes of Health to perform biostatistical research on the selection of variables associated with cancer and cancer research. </span></em></p>Two prestigious journals have suggested abandoning the traditional test of the strength of a study’s results. But a statistician worries that this would make science worse.Valen E. Johnson, University Distinguished Professor and Department Head of Statistics, Texas A&M UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1139982019-03-22T18:19:31Z2019-03-22T18:19:31ZDoes Monsanto’s Roundup cause cancer? The law says yes, the science says maybe<figure><img src="https://images.theconversation.com/files/265333/original/file-20190322-36276-hnz03n.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Law and science seek proof in similar ways, but at very different speeds</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/old-book-library-stethoscope-on-open-763907971">Chinnapong/Shutterstock</a></span></figcaption></figure><p>A federal jury in California has unanimously decided that the weedkiller Roundup was <a href="https://www.nytimes.com/2019/03/19/business/monsanto-roundup-cancer.html">a “substantial factor</a>” in causing the lymphoma of 70-year-old Edwin Hardeman, who had used Roundup on his property for many years, and <a href="https://www.npr.org/2019/03/27/707439575/jury-awards-80-million-in-damages-in-roundup-weed-killer-cancer-trial">awarded Hardeman US$80 million in damages</a>. This is the second such verdict in less than eight months. 
In August 2018 another jury concluded that groundskeeper DeWayne Johnson <a href="https://theconversation.com/jury-finds-monsanto-liable-in-the-first-roundup-cancer-trial-heres-what-could-happen-next-101433">developed cancer due to his exposure to Roundup</a>, and ordered Monsanto, the manufacturer, to pay Johnson nearly US$300 million in damages.</p>
<p>In product liability cases like these, plaintiffs must prove that the product <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3006278">was the “specific cause” of the harm done</a>. The law sets a very high bar, which may be unrealistic for harms such as a diagnosis of cancer. Nonetheless, two juries have now ruled against Roundup. </p>
<p>Monsanto’s lawyers insist that <a href="https://monsanto.com/news-stories/statements/roundup-glyphosate-dewayne-johnson-trial/">Roundup is safe</a> and that the plaintiffs’ arguments in both cases were <a href="https://www.sfchronicle.com/bayarea/article/Monsanto-s-Roundup-found-by-jury-to-be-likely-13701218.php">scientifically flawed</a>. But jurors believed that they were shown enough evidence to meet the legal criteria for finding Roundup was the “specific cause” of cancer in both men.</p>
<p>As a result of these high-profile trials, Los Angeles County has <a href="https://www.nbclosangeles.com/news/local/LA-County-Halts-Use-of-Popular-Weed-Killer-on-County-Property-507399471.html">halted use of Roundup</a> by all of its departments until clearer evidence is available about its potential health and environmental effects.</p>
<p>Although “proof” has a similar primary meaning in science and law – a consensus of experts – how it is achieved is often quite different. Most importantly, in science there is no deadline for a discovery, whereas in law, timeliness is paramount. The conundrum is that a legal decision may be required for a potentially dangerous product on the market before the science has been settled.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=405&fit=crop&dpr=1 600w, https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=405&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=405&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=509&fit=crop&dpr=1 754w, https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=509&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/265334/original/file-20190322-36256-e7ymew.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=509&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">DeWayne Johnson hugs one of his lawyers after hearing the verdict in his case against Monsanto in San Francisco on Aug. 10, 2018.</span>
<span class="attribution"><a class="source" href="http://www.apimages.com/metadata/Index/Roundup-Weed-Killer-Cancer/0bad759c98b644da9ff243d435347ddf/30/0">Josh Edelson/Pool Photo via AP</a></span>
</figcaption>
</figure>
<h2>What is ‘proof’?</h2>
<p>Proof is an elusive concept. Do we need proof that our glimpse of stripes in the jungle is a tiger before we run? Do we need proof that the jet engines are reliable before clearing a plane to take off for London with 300 passengers on board? </p>
<p>Can proof ever be absolute, or is it inherently a statement of probabilities?</p>
<p>Scientists use proof to advance our understanding of nature. Science assumes that there is an objective reality underlying all of nature, which we can eventually understand. Nature has no moral compass: It is neither good nor bad – it simply is. Scientists are human, so they experience joy or disappointment depending on the outcome of an experiment, but those emotions do not alter the truths of nature.</p>
<p>In contrast, lawyers use proof to find justice for people. Law is built on the premise that there are widely accepted codes of human behavior, which should be rectified when they are violated. Ideally, justice under the law is a highly moral endeavor with fairness at its core.</p>
<h2>Proof in science</h2>
<p>Scientists vigorously argue about whether an experiment proves a new detail in the vast tapestry of nature. Most scientists require that a new experimental finding be reproducible, statistically significant and plausible within the context of experiments that came before it. </p>
<p>But often conventional wisdom, based on what had been proven in the past, is wrong. </p>
<p>For example, until the 1980s medical wisdom said the cause of stomach ulcers was too much acid secretion. Therefore, young doctors learned in medical school to treat ulcers with antacids, milk and a bland diet. Then in 1983 a couple of troublemaking Australians named Robin Warren and Barry Marshall suggested that <a href="https://doi.org/10.1016/S0140-6736(83)92719-8">a bacterium actually caused ulcers</a>.</p>
<p>Of course, this was not believed to be possible because no bacterium could survive in the highly acidic environment of the stomach. Marshall and Warren were widely ridiculed after their article appeared, and <a href="https://www.csicop.org/si/show/bacteria_ulcers_and_ostracism_h._pylori_and_the_making_of_a_myth">heckled at conferences</a> where they presented the idea. However, other scientists became interested and started to investigate the alternative theory. </p>
<p>New evidence accumulated over the next decade and ultimately proved that Marshall and Warren were right. They received the <a href="https://www.nobelprize.org/prizes/medicine/2005/press-release/">Nobel Prize in Medicine</a> in 2005. Today the bacterium, <em>H. pylori</em>, is believed not only to cause ulcers but also <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/ijc.28999">most stomach cancers worldwide</a>.</p>
<h2>Proof in law</h2>
<p>To reveal the facts of a legal dispute, lawyers engage in adversarial argument. Attorneys for each side argue from their client’s perspective, without claiming to be objective. In an ideal world, with diligent and honest attorneys on both sides, justice should prevail. Often, however, a case is not ideal. </p>
<p>In some product liability lawsuits it can be perfectly clear that a faulty product, such as the <a href="https://www.nytimes.com/2017/02/27/business/takata-airbags-automakers-class-action.html">rupture-prone Takata airbags</a> that car manufacturers were forced to recall several years ago, caused a plaintiff’s injury. However, as I wrote in connection with the <a href="https://theconversation.com/does-monsantos-roundup-cause-cancer-trial-highlights-the-difficulty-of-proving-a-link-100875">first Roundup lawsuit</a>, this is close to impossible to prove in cancer cases.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/BnU3sidMlls?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Product liability is the area of law in which consumers can bring claims against manufacturers and sellers for products that injure people.</span></figcaption>
</figure>
<p>DeWayne Johnson’s lawsuit against Monsanto turned on a 2015 scientific assessment from the International Agency for Research on Cancer, an agency of the World Health Organization, classifying glyphosate – the active ingredient in Roundup – as a “2A: probable human carcinogen.” However, this finding does not mean that Roundup “probably” caused Johnson’s lymphoma. </p>
<p>The European Food Safety Authority, an equally authoritative deliberative body, also assessed glyphosate, concluding that it was <a href="https://doi.org/10.1007/s00204-017-1962-5">unlikely to pose a cancer risk</a> and actual exposure levels did not represent a public health concern. This study considered much of the same evidence as the International Agency for Research on Cancer, but interpreted it differently.</p>
<p>Nonetheless, the jury concluded that Roundup had caused Johnson’s cancer and awarded $289 million in damages, which was reduced to $80 million on appeal. Clearly, in their view, there was sufficient “proof” for the case against Roundup. </p>
<h2>Different kinds of expertise</h2>
<p>In science, proof can only be defined as a <a href="https://theconversation.com/does-monsantos-roundup-cause-cancer-trial-highlights-the-difficulty-of-proving-a-link-100875">consensus of experts</a> who agree that the facts overwhelmingly support a specific conclusion. In law the jury plays that role, with jurors expected to become experts in the case. </p>
<p>This means, of course, that what has been proven in science or in law can be unproven with new evidence or new experts. </p>
<p>Many big questions in physics, geology and biology have taken centuries to answer, and scientists constantly re-evaluate those answers in light of new evidence. For example, in the 1930s physicists widely agreed that there were three fundamental particles: electrons, protons and neutrons. Today the <a href="https://home.cern/science/physics/standard-model">standard model of physics</a> holds that there are at least a dozen elementary particles, with many others hypothesized but not yet proven to exist.</p>
<p>Legal judgments have much more immediate impacts – sometimes life or death. Justice delayed is justice denied, and jurors must agree on a final proof to deliver a verdict. But as history has painfully taught us, a rush to judgment can yield the opposite of equity. Glyphosate <a href="https://www.glyphosate.eu/benefits">provides many benefits</a>, which must be weighed against the potential for harm.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/fMlN--8FhTY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Bayer, Monsanto’s parent company, faces potentially enormous liability from thousands of lawsuits claiming Roundup gave plaintiffs cancer.</span></figcaption>
</figure>
<p>So, what is a juror in the next Roundup trial to do? As I have <a href="https://theconversation.com/does-monsantos-roundup-cause-cancer-trial-highlights-the-difficulty-of-proving-a-link-100875">argued previously</a>, “specific causation” for cancer can almost never be proved. </p>
<p>However, that does not mean that a plaintiff has no case. If the formal standard in law were changed to “<a href="https://www.cdc.gov/niosh/ocas/faqspoc.html">probability of causation</a>” as used by the Centers for Disease Control for occupational cancers, then a jury could find a product guilty of substantially increasing the risk, and make an award for the plaintiff, potentially a large one. In my view, if this were the standard, future rulings like the two we have already seen would align law and science on this issue more closely.</p>
<p><em>This story has been updated to include the $80 million damage award in the second Roundup lawsuit.</em></p><img src="https://counter.theconversation.com/content/113998/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Richard G. "Bugs" Stevens does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>What is proof? In both law and science, it’s basically a consensus of experts – but they work at very different speeds. That means juries may reach verdicts on an issue before the science is settled.Richard G. "Bugs" Stevens, Professor, School of Medicine, University of ConnecticutLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1080022018-12-21T11:42:33Z2018-12-21T11:42:33ZYes, there is a war between science and religion<figure><img src="https://images.theconversation.com/files/251794/original/file-20181220-103649-1i46rvm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Doubting Thomas needed the proof, just like a scientist, and now is a cautionary Biblical example.</span> <span class="attribution"><a class="source" href="https://commons.wikimedia.org/wiki/File:The_Incredulity_of_Saint_Thomas-Caravaggio_(1601-2).jpg">Caravaggio/Wikimedia Commons</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>As the West becomes <a href="https://www.cambridge.org/us/academic/subjects/politics-international-relations/comparative-politics/sacred-and-secular-religion-and-politics-worldwide-2nd-edition?format=PB">more and more secular</a>, and the discoveries of evolutionary biology and cosmology shrink the boundaries of faith, the claims that science and religion are compatible grow louder. If you’re a believer who doesn’t want to seem anti-science, what can you do? You must argue that your faith – or any faith – is perfectly compatible with science.</p>
<p>And so one sees claim after claim from <a href="https://theconversation.com/war-between-science-and-religion-is-far-from-inevitable-106477">believers</a>, <a href="https://www.abc.net.au/news/science/2018-05-24/three-scientists-talk-about-how-their-faith-fits-with-their-work/9543772#lightbox-content-lightbox-39">religious scientists</a>, <a href="http://www.nas.edu/evolution/Compatibility.html">prestigious science organizations</a> and <a href="https://www.cambridge.org/us/academic/subjects/philosophy/philosophy-science/can-darwinian-be-christian-relationship-between-science-and-religion?format=PB">even atheists</a> asserting not only that science and religion are compatible, but also that they can actually help each other. This claim is called “<a href="https://theconversation.com/against-accommodationism-how-science-undermines-religion-52660">accommodationism</a>.”</p>
<p>But I argue that this is misguided: that science and religion are not only in conflict – even at “war” – but also represent incompatible ways of viewing the world.</p>
<h2>Opposing methods for discerning truth</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=840&fit=crop&dpr=1 600w, https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=840&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=840&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1056&fit=crop&dpr=1 754w, https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1056&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/251791/original/file-20181220-103649-c60gib.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1056&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The scientific method relies on observing, testing and replication to learn about the world.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/7wWRXewYCH4">Jaron Nix/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>My argument runs like this. I’ll construe “science” as the set of tools we use to find truth about the universe, with the understanding that these truths are provisional rather than absolute. These tools include observing nature, framing and testing hypotheses, trying your hardest to prove that your hypothesis is wrong to test your confidence that it’s right, doing experiments and above all replicating your and others’ results to increase confidence in your inference.</p>
<p>And I’ll define religion <a href="https://www.penguinrandomhouse.com/books/294244/breaking-the-spell-by-daniel-c-dennett/9780143038337">as does philosopher Daniel Dennett</a>: “Social systems whose participants avow belief in a supernatural agent or agents whose approval is to be sought.” Of course many religions don’t fit that definition, but the ones whose compatibility with science is touted most often – the Abrahamic faiths of Judaism, Christianity and Islam – fill the bill.</p>
<p>Next, realize that both religion and science rest on “truth statements” about the universe – claims about reality. The edifice of religion differs from science by additionally dealing with morality, purpose and meaning, but even those areas rest on a foundation of empirical claims. You can hardly call yourself a Christian if you don’t believe in the Resurrection of Christ, a Muslim if you don’t believe the angel Gabriel dictated the Qur’an to Muhammad, or a Mormon if you don’t believe that the angel Moroni showed Joseph Smith the golden plates that became the Book of Mormon. After all, why accept a faith’s authoritative teachings if you reject its truth claims?</p>
<p>Indeed, <a href="https://www.biblegateway.com/passage/?search=1+Corinthians+15&version=KJV">even the Bible</a> notes this: “But if there be no resurrection of the dead, then is Christ not risen: And if Christ be not risen, then is our preaching vain, and your faith is also vain.”</p>
<p>Many theologians emphasize religion’s empirical foundations, agreeing with the physicist and Anglican priest <a href="https://yalebooks.yale.edu/book/9780300188110/science-and-religion-quest-truth">John Polkinghorne</a>:</p>
<blockquote>
<p>“The question of truth is as central to [religion’s] concern as it is in science. Religious belief can guide one in life or strengthen one at the approach of death, but unless it is actually true it can do neither of these things and so would amount to no more than an illusory exercise in comforting fantasy.”</p>
</blockquote>
<p>The conflict between science and faith, then, rests on the methods they use to decide what is true, and what truths result: These are conflicts of both methodology and outcome.</p>
<p>In contrast to the methods of science, religion adjudicates truth not empirically, but via dogma, scripture and authority – in other words, through faith, <a href="https://www.biblegateway.com/passage/?search=Hebrews+11%3A1&version=KJV">defined in Hebrews 11</a> as “the substance of things hoped for, the evidence of things not seen.” In science, faith without evidence is a vice, while in religion it’s a virtue. Recall <a href="https://www.biblegateway.com/passage/?search=John+20:29&version=KJV">what Jesus said</a> to “doubting Thomas,” who insisted on poking his fingers into the resurrected Savior’s wounds: “Thomas, because thou hast seen me, thou hast believed: blessed are they that have not seen, and yet have believed.”</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&rect=156%2C0%2C3597%2C2661&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&rect=156%2C0%2C3597%2C2661&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/251789/original/file-20181220-103657-jmz86k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Two ways to look at the same thing, never the twain shall meet.</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/yJr1rbbrAGw">Gabriel Lamza/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>And yet, without supporting evidence, <a href="https://theharrispoll.com/new-york-n-y-december-16-2013-a-new-harris-poll-finds-that-while-a-strong-majority-74-of-u-s-adults-do-believe-in-god-this-belief-is-in-decline-when-compared-to-previous-years-as-just-over/">Americans believe a number of religious claims</a>: 74 percent of us believe in God, 68 percent in the divinity of Jesus, 68 percent in Heaven, 57 percent in the virgin birth, and 58 percent in the Devil and Hell. Why do they think these are true? Faith.</p>
<p>But different religions make different – and often conflicting – claims, and there’s no way to judge which claims are right. There are <a href="http://www.adherents.com/">over 4,000 religions on this planet</a>, and their “truths” are quite different. (Muslims and Jews, for instance, absolutely reject the Christian belief that Jesus was the son of God.) Indeed, new sects often arise when some believers reject what others see as true. <a href="https://doi.org/10.1007/s12052-010-0221-5">Lutherans split over the truth of evolution</a>, while Unitarians rejected other Protestants’ belief <a href="http://www.bbc.co.uk/religion/religions/unitarianism/beliefs/god.shtml">that Jesus was part of God</a>.</p>
<p>And while science has had success after success in understanding the universe, the “method” of using faith has led to no proof of the divine. How many gods are there? What are their natures and moral creeds? Is there an afterlife? Why is there moral and physical evil? There is no one answer to any of these questions. All is mystery, for all rests on faith.</p>
<p>The “war” between science and religion, then, is a conflict about whether you have good reasons for believing what you do: whether you see faith as a vice or a virtue.</p>
<h2>Compartmentalizing realms is irrational</h2>
<p>So how do the faithful reconcile science and religion? Often they point to the existence of religious scientists, like <a href="https://www.simonandschuster.com/books/The-Language-of-God/Francis-S-Collins/9781416542742">NIH Director Francis Collins</a>, or to the many religious people who accept science. But I’d argue that this is compartmentalization, not compatibility, for how can you reject the divine in your laboratory but accept that the wine you sip on Sunday is the blood of Jesus?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/251790/original/file-20181220-103657-15c6yz3.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Can divinity be at play in one setting but not another?</span>
<span class="attribution"><a class="source" href="https://unsplash.com/photos/21xmyDjZPck">Jametlene Reskp/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Others argue that <a href="https://theconversation.com/jesuits-as-science-missionaries-for-the-catholic-church-47829">in the past religion promoted science</a> and inspired questions about the universe. But in the past every Westerner was religious, and it’s debatable whether, in the long run, the progress of science has been promoted by religion. Certainly evolutionary biology, <a href="https://scholar.google.com/citations?user=iME3jMYAAAAJ&hl=en&oi=ao">my own field</a>, has been <a href="https://doi.org/10.1111/j.1558-5646.2012.01664.x">held back strongly by creationism</a>, which arises solely from religion.</p>
<p>What is not disputable is that today science is practiced as an atheistic discipline – and largely by atheists. There’s <a href="https://global.oup.com/academic/product/religion-vs-science-9780190650629">a huge disparity in religiosity</a> between American scientists and Americans as a whole: 64 percent of our elite scientists are atheists or agnostics, compared to only 6 percent of the general population – more than a tenfold difference. Whether this reflects differential attraction of nonbelievers to science or science eroding belief – I suspect both factors operate – the figures are prima facie evidence for a science-religion conflict.</p>
<p>The most common accommodationist argument is <a href="http://www.randomhousebooks.com/books/70014/">Stephen Jay Gould’s thesis</a> of “non-overlapping magisteria.” Religion and science, he argued, don’t conflict because: “Science tries to document the factual character of the natural world, and to develop theories that coordinate and explain these facts. Religion, on the other hand, operates in the equally important, but utterly different, realm of human purposes, meanings and values – subjects that the factual domain of science might illuminate, but can never resolve.”</p>
<p>This fails on both ends. First, religion certainly makes claims about “the factual character of the universe.” In fact, the biggest opponents of non-overlapping magisteria are believers and theologians, many of whom reject the idea that Abrahamic religions are “<a href="http://monopolizingknowledge.net/contents.html">empty of any claims to historical or scientific facts</a>.”</p>
<p>Nor is religion the sole bailiwick of “purposes, meanings and values,” which of course differ among faiths. There’s a long and distinguished history of philosophy and ethics – extending from Plato, Hume and Kant up to Peter Singer, Derek Parfit and <a href="https://scholar.google.com/citations?user=clG4xHAAAAAJ&hl=en&oi=ao">John Rawls</a> in our day – that relies on <a href="https://theconversation.com/are-religious-people-more-moral-84560">reason rather than faith</a> as a fount of morality. All serious ethical philosophy is secular ethical philosophy.</p>
<p>In the end, it’s irrational to decide what’s true in your daily life using empirical evidence, but then rely on wishful thinking and ancient superstitions to judge the “truths” undergirding your faith. This leads to a mind (no matter how scientifically renowned) at war with itself, producing the cognitive dissonance that prompts accommodationism. If you decide to have good reasons for holding any beliefs, then you must choose between faith and reason. And as facts become increasingly important for the welfare of our species and our planet, people should see faith for what it is: not a virtue but a defect.</p><img src="https://counter.theconversation.com/content/108002/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jerry Coyne does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>An evolutionary biologist makes the case that there’s no reconciling science and religion. In the search for truth, one tests hypotheses while the other relies on faith.Jerry Coyne, Professor Emeritus of Ecology and Evolution, University of ChicagoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1028352018-12-13T11:44:26Z2018-12-13T11:44:26ZHow big data has created a big crisis in science<figure><img src="https://images.theconversation.com/files/242069/original/file-20181024-71014-1f3d40t.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Scientists are facing a reproducibility crisis.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/recycle-crumpled-paper-trash-can-242668294?src=CuOH22LQ8w3J-cEHfkGeXA-1-62">Y Photo Studio/shutterstock.com</a></span></figcaption></figure><p>There’s an increasing concern among scholars that, in many areas of science, famous published results tend to be impossible to reproduce. </p>
<p>This crisis can be severe. For example, in 2011, <a href="https://doi.org/10.1038/nrd3439-c1">Bayer HealthCare reviewed 67 in-house projects</a> and found that they could replicate less than 25 percent. Furthermore, over two-thirds of the projects had major inconsistencies. <a href="https://www.theatlantic.com/science/archive/2018/11/psychologys-replication-crisis-real/576223/">More recently</a>, in November, an investigation of 28 major psychology papers found that only half could be replicated.</p>
<p>Similar findings are reported across other fields, including <a href="https://doi.org/10.1371/journal.pmed.0020124">medicine</a> and <a href="https://doi.org/10.1111/ecoj.12461">economics</a>. These striking results put the credibility of all scientists in deep trouble.</p>
<p>What is causing this big problem? There are many contributing factors. As a statistician, I see huge issues with the way science is done in the era of big data. The reproducibility crisis is driven in part by invalid statistical analyses based on data-driven hypotheses – the opposite of how things are traditionally done.</p>
<h2>Scientific method</h2>
<p>In a classical experiment, the statistician and scientist first together frame a hypothesis. Then scientists conduct experiments to collect data, which are subsequently analyzed by statisticians. </p>
<p>A famous example of this process is the <a href="https://io9.gizmodo.com/how-a-tea-party-turned-into-a-scientific-legend-1706697488">“lady tasting tea” story.</a> Back in the 1920s, at a party of academics, a woman claimed to be able to tell the difference in flavor if the tea or milk was added first in a cup. Statistician Ronald Fisher doubted that she had any such talent. He hypothesized that, out of eight cups of tea, prepared such that four cups had milk added first and the other four cups had tea added first, the number of correct guesses would follow a probability model called the <a href="http://mathworld.wolfram.com/HypergeometricDistribution.html">hypergeometric distribution</a>.</p>
<p>Such an experiment was done with eight cups of tea sent to the lady in a random order – and, according to legend, she categorized all eight correctly. This was strong evidence against Fisher’s hypothesis. The chance that the lady had achieved all correct answers through random guessing was an extremely low 1.4 percent. </p>
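Fisher’s 1.4 percent figure can be checked directly: under pure guessing, the lady’s choice of which four cups had milk first is one of C(8, 4) equally likely selections, only one of which is fully correct. A minimal sketch in Python (the language is my choice for illustration; the original analysis predates any of this):

```python
from math import comb

# Under pure guessing, the lady picks 4 of the 8 cups as "milk first".
# Only one of the C(8, 4) = 70 equally likely choices matches the truth,
# which is exactly the hypergeometric probability Fisher used.
p_all_correct = 1 / comb(8, 4)
print(round(p_all_correct, 3))  # 0.014, i.e. about 1.4 percent
```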
<p>That process – hypothesize, then gather data, then analyze – is rare in the big data era. Today’s technology can collect <a href="ftp://public.dhe.ibm.com/software/data/sw-library/bda/zone/index.html">huge amounts of data</a>, on the order of 2.5 exabytes a day. </p>
<p>While this is a good thing, science often develops at a much slower speed, and so researchers may not know how to specify the right hypothesis before analyzing the data. For example, scientists can now collect tens of thousands of gene expressions from people, but it is very hard to decide whether one should include or exclude a particular gene in the hypothesis. In this case, it is appealing to form the hypothesis based on the data. While such hypotheses may appear compelling, conventional inferences from these hypotheses are generally invalid. This is because, in contrast to the “lady tasting tea” process, the order of building the hypothesis and seeing the data has been reversed.</p>
<h2>Data problems</h2>
<p>Why can this reversal cause a big problem? Let’s consider a big data version of the tea lady — a “100 ladies tasting tea” example.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/246301/original/file-20181119-76137-hbf5bc.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What’s in the cup?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/drink-tea-relax-cosy-photo-blurred-529505785">Alex Yuzhakov/shutterstock.com</a></span>
</figcaption>
</figure>
<p>Suppose there are 100 ladies who cannot tell the difference between the cups, but who take a guess anyway after tasting all eight. There’s actually a 75.6 percent chance that at least one lady would luckily guess all of the orders correctly. </p>
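That 75.6 percent figure follows directly from the single-lady probability: the chance that at least one of 100 independent guessers gets a perfect score is one minus the chance that all of them fail. A quick sketch (plugging in the rounded 1.4 percent per-lady chance reproduces the article’s figure; the exact 1/70 gives a slightly higher value of about 76.3 percent):

```python
from math import comb

p_one = 1 / comb(8, 4)                  # one guessing lady: 1/70, about 1.4 percent
p_any = 1 - (1 - p_one) ** 100          # at least one of 100 ladies: about 0.763
p_any_rounded = 1 - (1 - 0.014) ** 100  # with the rounded 1.4 percent: about 0.756
print(round(p_any_rounded, 3))
```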
<p>Now, if a scientist saw one lady achieve the surprising outcome of getting every cup correct and ran a statistical analysis for her using the same hypergeometric distribution above, he might conclude that this lady had the ability to tell the difference between the cups. But this result isn’t reproducible. If the same lady did the experiment again, she would very likely sort the cups wrongly – not getting as lucky as her first time – since she couldn’t really tell the difference between them. </p>
<p>This small example illustrates how scientists can “luckily” see interesting but spurious signals from a dataset. They may formulate hypotheses based on these signals, then use the same dataset to draw conclusions, claiming the signals are real. It may be a while before they discover that their conclusions are not reproducible. This problem is <a href="https://doi.org/10.1109/TIT.2017.2700202">particularly common in big data analysis</a>: because the datasets are so large, spurious signals may “luckily” occur just by chance. </p>
<p>What’s worse, this process may allow scientists <a href="https://doi.org/10.1371/journal.pone.0005738">to manipulate the data</a> to produce the most publishable result. <a href="http://books.wwnorton.com/books/How-to-Lie-with-Statistics/">Statisticians joke</a> about such a practice: “If you torture the data hard enough, it will tell you something.” However, is this “something” valid and reproducible? Probably not.</p>
<h2>Stronger analyses</h2>
<p>How can scientists avoid the above problem and achieve reproducible results in big data analysis? The answer is simple: Be more careful. </p>
<p>If scientists want reproducible results from data-driven hypotheses, then they need to carefully take the data-driven process into account in the analysis. Statisticians need to design new procedures that provide valid inferences. There are <a href="https://projecteuclid.org/euclid.aos/1369836961">a few already underway</a>.</p>
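One simple way to account for the data-driven selection is sample splitting: use one round of data to find a promising hypothesis, then test it on fresh data. The simulation below illustrates that idea with the tea example; it is only an illustration of the principle, not the formal procedures developed in the linked research (the names and seed are my own choices):

```python
import random

P_ALL_CORRECT = 1 / 70     # a guessing lady's chance of a perfect score
rng = random.Random(7)     # fixed seed so the run is repeatable
n_trials, n_ladies = 5000, 100

selected = confirmed = 0
for _ in range(n_trials):
    # Round 1: screen 100 guessing ladies and "discover" any perfect scorer.
    if any(rng.random() < P_ALL_CORRECT for _ in range(n_ladies)):
        selected += 1
        # Round 2: retest the discovered lady on fresh data before believing it.
        if rng.random() < P_ALL_CORRECT:
            confirmed += 1

discovery_rate = selected / n_trials             # roughly 0.76: spurious finds are common
replication_rate = confirmed / max(selected, 1)  # roughly 0.014: they rarely replicate
```

The screening round “discovers” a perfect scorer in about three-quarters of trials, yet her perfect score almost never survives the confirmation round, because she was only ever guessing.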
<p>Statistics is about the optimal way to extract information from data. By its nature, it is a field that evolves with the evolution of data. The problems of the big data era are just one example of such evolution. I think that scientists should embrace these changes, as they will lead to opportunities to develop novel statistical techniques, which will in turn provide valid and interesting scientific discoveries.</p><img src="https://counter.theconversation.com/content/102835/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Kai Zhang's research is partially supported by funding from the National Science Foundation DMS-1613112 and IIS-1633212. </span></em></p>Science is in a reproducibility crisis. This is driven in part by invalid statistical analyses that happen long after the data are collected – the opposite of how things are traditionally done.Kai Zhang, Associate Professor of Statistics and Operations Research, University of North Carolina at Chapel HillLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1060412018-11-19T11:36:51Z2018-11-19T11:36:51ZThe equivalence test: A new way for scientists to tackle so-called negative results<figure><img src="https://images.theconversation.com/files/245434/original/file-20181113-194497-6etpxk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A new statistical test lets scientists figure out if two groups are similar to one another. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/brush-skeleton-dig-dinosaur-fossil-sedimentary-426848773?src=OQSTlxRXMh_sqKQNzj6Y4w-1-11">paleontologist natural/shutterstock.com</a></span></figcaption></figure><p>A paleontologist returns to her lab from a summer dig and sets up a study comparing tooth length in two dinosaur species. She and her team work meticulously to avoid biasing their results. They remain blind to the species while measuring, the sample sizes are large, and the data collection and the analysis are rigorous. </p>
<p>The scientist is surprised to find no significant difference in canine tooth length between the two species. She realizes that these unexpected results are important and sends a paper off to the appropriate journals. But journal after journal rejects the paper, since the results aren’t significantly different. Eventually, the scientist gives up, and the paper with its so-called negative results is placed in a drawer and buried under years of other work.</p>
<p>This scenario and many others like it have played out across all scientific disciplines, leading to what has been dubbed “<a href="https://undark.org/article/loss-of-confidence-project-replication-crisis/">the file drawer problem</a>.” Research journals and funding agencies are often biased toward research that shows “positive” or significantly different results. This unfortunate bias contributes to many other issues in the scientific process, such as <a href="http://dx.doi.org/10.1037/1089-2680.2.2.175">confirmation bias</a>, in which data are interpreted incorrectly to support a desired outcome. </p>
<h2>A new method: Equivalence</h2>
<p>Unfortunately, publication bias issues have been prevalent in science for a long time. Due to <a href="http://onlinestatbook.com/2/tests_of_means/difference_means.html">the structure of the scientific method</a>, scientists often focus only on differences between groups – like the dinosaur teeth from two different species, or a public health comparison of two different neighborhoods. This leaves studies that focus on similarities completely hidden. </p>
<p>However, <a href="https://doi.org/10.1016/0163-7258(94)90004-3">pharmaceutical trials</a> have found a solution for this problem. In these trials, researchers sometimes use a test known as TOST (two one-sided tests) to look for equivalence between treatments. </p>
<p>For example, say a company develops a generic drug that is cheaper to produce than the name-brand drug. Researchers need to demonstrate that the new drug functions in a statistically equivalent manner to the name brand before selling it on the market. That’s where equivalence testing comes in. If the test shows equivalence between the effects of the two drugs, then the FDA can approve the new drug’s release on the market. </p>
<p>While traditional equivalence testing is very helpful for preplanned and controlled pharmaceutical tests, it isn’t versatile enough for other types of studies. The original TOST cannot be used to test equivalence in experiments <a href="http://blog.minitab.com/blog/adventures-in-statistics-2/repeated-measures-designs-benefits-challenges-and-an-anova-example">where the same individuals are in multiple treatment groups</a>, nor does it work if the two test groups have different sample sizes. </p>
<p>Additionally, the TOST used in pharmaceutical testing does not typically address multiple variables simultaneously. For example, a traditional TOST would be able to analyze similarities in biodiversity at several river locations before and after a temperature change. However, our new TOST allows researchers to test for similarities in multiple variables – such as biodiversity, water pH, water depth and water clarity – at all of the river sites simultaneously.</p>
<p>The limitations of the traditional TOST and the pervasiveness of the “file drawer problem” led our team <a href="https://doi.org/10.1016/j.anbehav.2018.09.004">to develop a multivariate equivalence test</a>, capable of addressing similarities in systems with repeated measures and unequal sample sizes. </p>
<p>Our new equivalence test, <a href="https://www.sciencedaily.com/releases/2018/10/181016150725.htm">published in October</a>, flips the traditional null hypothesis framework on its head. Now, rather than assuming similarity, a researcher starts with the assumption that the two groups are different. The burden of proof now lies with evaluating the degree of similarity, rather than the degree of difference. </p>
<p>Our test also allows researchers to set their own acceptable margin for declaring similarity. For example, if the margin were set to 0.02, then the results would tell you if the means of the two groups were similar within plus or minus 2 percent.</p>
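To sketch how a basic TOST works mechanically: two one-sided null hypotheses are tested – that the difference in means is at most −margin, and that it is at least +margin – and equivalence is declared only if both are rejected. The code below is a simplified large-sample z-approximation I wrote for illustration (real trials use t-distributions and validated software, and this is not the multivariate test described in the article):

```python
import random
from statistics import NormalDist, mean, variance

def tost_equivalent(a, b, margin, alpha=0.05):
    """Large-sample TOST sketch: are the means of a and b within +/- margin?"""
    diff = mean(a) - mean(b)
    # Welch-style standard error from the two sample variances
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    z = NormalDist()
    p_low = 1 - z.cdf((diff + margin) / se)  # one-sided test of H0: diff <= -margin
    p_high = z.cdf((diff - margin) / se)     # one-sided test of H0: diff >= +margin
    p = max(p_low, p_high)                   # TOST p-value: both tests must reject
    return p, p < alpha

rng = random.Random(1)
a = [rng.gauss(10.0, 1.0) for _ in range(400)]
b_similar = [rng.gauss(10.05, 1.0) for _ in range(400)]
b_different = [rng.gauss(12.0, 1.0) for _ in range(400)]

p_sim, eq_sim = tost_equivalent(a, b_similar, margin=0.5)
p_diff, eq_diff = tost_equivalent(a, b_different, margin=0.5)
```

With a margin of 0.5, the two nearly identical samples come out equivalent while the shifted sample does not; shrinking the margin makes the equivalence claim correspondingly stricter.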
<h2>A step in the right direction</h2>
<p>Our modification means that equivalence testing can now be applied across a wide range of disciplines. For example, we used this test to demonstrate equivalent acoustic structure in the songs of male and female eastern bluebirds. Equivalence testing has also already been used in some areas of <a href="https://doi.org/10.1007/s11219-013-9196-0">engineering</a> and <a href="https://doi.org/10.1177%2F1948550617697177">psychology</a>. </p>
<p>The method could be applied even more broadly. Imagine a group of researchers who want to examine two different teaching methods. In one classroom there is no technology, and in another all of the students’ assignments are done online. Equivalence testing might help a school district decide whether it should invest more in technology or whether the two methods of teaching are equivalent. </p>
<p>The development of a broadly applicable equivalence test represents what we think will be a huge step forward in scientists’ long struggle to present real and unbiased results. This test provides another avenue for exploration and allows researchers to examine and publish the results from studies on similarities that have not been published or funded in the past. </p>
<p>The prevalence of publication bias, including the <a href="http://dx.doi.org/10.1037/0033-2909.86.3.638">file drawer problem</a>, confirmation bias and accidental <a href="https://projects.fivethirtyeight.com/p-hacking/">false positives</a>, is a major stumbling block for scientific progress. In some fields of research, up to half of results are missing from the published literature. </p>
<p>Equivalence testing provides another tool in the toolbox for scientists to present “positive” results. If the scientific community takes hold of this test and utilizes it to its full potential, we think it may help mitigate one of the major limitations in the way science is currently practiced. </p>
<p><em>This article has been updated to correct the margin of declaring similarity.</em></p><img src="https://counter.theconversation.com/content/106041/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Evangeline Shank receives funding from The Maryland Ornithological Society. </span></em></p><p class="fine-print"><em><span>Kevin Omland receives funding from the National Science Foundation and the American Bird Conservancy.</span></em></p><p class="fine-print"><em><span>Thomas Mathew receives funding from NIH (past funding). </span></em></p>A new statistical test lets researchers search for similarities between groups. Could this help keep new important findings out of the file drawer?Evangeline Rose, Ph.D Candidate in Biological Sciences, University of Maryland, Baltimore CountyKevin Omland, Professor of Biological Sciences, University of Maryland, Baltimore CountyThomas Mathew, Professor of Mathematics and Statistics, University of Maryland, Baltimore CountyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1034392018-09-21T04:18:38Z2018-09-21T04:18:38ZIs it time for Australia to be more open about research involving animals?<figure><img src="https://images.theconversation.com/files/237027/original/file-20180919-146148-qzqyzc.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Most people never have the chance to see how animals live in laboratories.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/rodents-raised-ivc-rodent-caging-system-532710010?src=UBG2OMQs3lQPDnebMHZEeA-1-11">from www.shutterstock.com </a></span></figcaption></figure><p>The use of animals in scientific research is a complex ethical issue, and these studies typically take place behind closed doors.</p>
<p>But since 2012, more than 120 of Britain’s universities, research institutions and pharmaceutical companies have signed a public pledge committing them to greater openness in their animal research programs.</p>
<p>The commitment is called the <a href="http://concordatopenness.org.uk/">Concordat on Openness on Animal Research</a> – and there’s an argument to be made that a similar movement should be started in Australia. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-live-export-trade-is-unethical-it-puts-money-ahead-of-animals-pain-96849">The live export trade is unethical. It puts money ahead of animals' pain</a>
</strong>
</em>
</p>
<hr>
<h2>Pros and cons</h2>
<p>Crucial advances in fields such as medicine and psychology have occurred through clinical trials and experiments involving animals. Experiments on living subjects are inherently risky, and many would argue that it is better to impose the initial risk on non-human animals and only move on to human subjects after there is more evidence of safety. </p>
<p>Proponents of this approach sometimes appeal to the (controversial) idea that human beings have a higher moral status due to greater rational capacities that are <a href="http://admin.cambridge.org/academic/subjects/philosophy/political-philosophy/animals-issue-moral-theory-practice?format=AR">supposedly necessary for having rights</a>. </p>
<p>On the other hand, there have been numerous instances of animals being forced to endure extreme suffering for the sake of trivial findings, such as the infamous 1972 <a href="https://www.annualreviews.org/doi/abs/10.1146/annurev.me.23.020172.002203">learned helplessness experiments of Martin Seligman</a>, in which dogs were given repeated painful shocks. </p>
<p>Even if it is necessary to conduct at least some trials on living subjects, it’s not unreasonable to suggest that human beings should be the ones to bear the burdens of their own scientific pursuits. </p>
<p>Such a position draws support from the ethical intuition that a given quantity of suffering endured by any one individual is of <a href="https://www.hackettpublishing.com/the-methods-of-ethics">no more or less importance</a> than the equal suffering of any other (regardless of race, gender, or species). </p>
<p>And, unlike human beings, non-human animals are incapable of consenting. </p>
<h2>A history of hostility</h2>
<p>As with most contentious ethical issues, the apparent reasonableness of each side’s concerns can lead to hostility. <a href="https://www.nature.com/collections/mnzcndqhts">Animosity</a> between researchers and animal welfare advocates can make it all the more <a href="https://www.telegraph.co.uk/news/science/science-news/9142295/Medical-research-at-risk-due-to-animal-rights-activists.html">difficult to resolve their disagreements</a>. </p>
<p>Further, as activists become more vocal, scientists are motivated to be less open about their use of animals. This lack of transparency leaves the public less informed while fuelling distrust on the part of those who aim to protect animals’ interests. </p>
<p>It seems reasonable to aim for reduced hostility while searching for an arrangement that comes as close as possible to being morally tolerable for all parties to the debate. </p>
<p>A promising strategy along these lines has been implemented in the UK. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/animal-research-is-it-a-necessary-evil-95815">Animal research: is it a necessary evil?</a>
</strong>
</em>
</p>
<hr>
<h2>Concordat on openness</h2>
<p>The <a href="http://concordatopenness.org.uk/">Concordat on Openness in Animal Research in the UK</a> has led to substantial improvements in the way researchers engage with the public about their use of animals. </p>
<p>The pledge requires institutions to: </p>
<ul>
<li>be clear about when, how and why they use animals in research</li>
<li>enhance communications with the media and public</li>
<li>be proactive in providing opportunities for the public to learn about animal research</li>
<li>report annually on their experiences and share their practices.</li>
</ul>
<p>Some facilities even offer <a href="http://www.labanimaltour.org/">virtual lab tours</a>. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=299&fit=crop&dpr=1 600w, https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=299&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=299&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=376&fit=crop&dpr=1 754w, https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=376&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/237030/original/file-20180919-158240-9nyiir.jpeg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=376&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Oxford University provides a virtual tour of some animal research facilities.</span>
<span class="attribution"><a class="source" href="http://www.labanimaltour.org/oxford">Screen shot captured September 19, 2018.</a></span>
</figcaption>
</figure>
<p>Although openness is not enough to eliminate ethical concerns, it has the important benefit of preventing the public from assuming the worst when it comes to animal experiments. It also helps ensure that research institutions follow ethical guidelines. </p>
<p>For countries such as Australia that have strong regulations on animal research, it only makes sense to encourage a pledge of transparency similar to the UK Concordat. </p>
<h2>It won’t be easy</h2>
<p>Some researchers may not be eager to make such a pledge. One obvious interpretation of such reluctance would be that animals are being used in ways that are morally objectionable. </p>
<p>But hesitancy could also be motivated by concern that the public’s lack of understanding will obscure the potential benefits, and perhaps also make the treatment of animals seem more severe than it is. </p>
<p>However, this possibility is all the more reason for researchers to take the opportunity to explain themselves and educate the public. </p>
<p>Of course, doing so requires substantial time and effort. But these costs are outweighed by the potential improvements in relations among researchers and animal activists, as well as a more informed dialogue about these issues. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/237031/original/file-20180919-158240-doskj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Many drugs used in humans were first tested in animal trials.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/chemotherapy-171516044?src=u2m-DdiX6SCjK-Kw9vvVmg-1-3">from www.shutterstock.com</a></span>
</figcaption>
</figure>
<p>Secrecy only leads to more divisiveness and hostility, including possible direct action that can interfere with research. Lack of openness can also lead to a general lack of trust in scientific researchers among the general public, which is something that isn’t good for anyone. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/should-lab-grown-meat-be-labelled-as-meat-when-its-available-for-sale-93129">Should lab-grown meat be labelled as meat when it's available for sale?</a>
</strong>
</em>
</p>
<hr>
<p>Some animal activists might worry that a formal pledge of openness will be used as a shield in order to legitimise the use of animals in perpetuity. Perhaps the prevailing view will be that as long as researchers are transparent and follow regulations, there are no legitimate grounds for further protest. </p>
<p>However, given that animal experimentation is ongoing, the most promising route to reduce unnecessary suffering is to ensure openness. Rather than putting an end to the debate, transparency can carry it forward with more information and a higher degree of amicability. This would be an improved outcome for all parties involved, including the animals.</p><img src="https://counter.theconversation.com/content/103439/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Tyler Paytas does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Since 2012, more than 120 of Britain’s universities, research institutions and pharmaceutical companies have signed a public pledge committing them to greater openness in their animal research programs.Tyler Paytas, Research Fellow in Philosophy, Australian Catholic UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1005792018-08-15T23:02:57Z2018-08-15T23:02:57ZThe real promise of LSD, MDMA and mushrooms for medical science<figure><img src="https://images.theconversation.com/files/229652/original/file-20180727-106496-1ceklm9.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Scientific pursuits need to be coupled with a humanist tradition — to highlight not just how psychedelics work, but why that matters. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Psychedelic science is making a comeback. </p>
<p>Scientific publications, therapeutic breakthroughs and cultural endorsements suggest that <a href="https://www.theglobeandmail.com/life/health-and-fitness/article-after-decades-of-dormancy-psychedelic-research-makes-a-comeback/">the historical reputation of psychedelics</a> — such as lysergic acid diethylamide (LSD), mescaline (from the peyote cactus) and psilocybin (mushrooms) — as dangerous or inherently risky have unfairly overshadowed a more optimistic interpretation. </p>
<p>Recent publications, like Michael Pollan’s <a href="https://www.penguinrandomhouse.com/books/529343/how-to-change-your-mind-by-michael-pollan/9781594204227/"><em>How to Change your Mind</em></a>, showcase the creative and potentially therapeutic benefits that psychedelics have to offer — for mental health challenges like depression and addiction, <a href="http://www.sciencemag.org/news/2016/12/hallucinogenic-drugs-help-cancer-patients-deal-%20their-fear-death">in palliative care settings</a> and for personal development. </p>
<p>Major scientific journals have published articles showing <a href="http://www.maps.org/resources/psychedelic-bibliography">evidence-based reasons for supporting research in psychedelic studies</a>. These include evidence that <a href="http://journals.sagepub.com/doi/full/10.1177/0269881116675513">psilocybin significantly reduces anxiety in patients with life-threatening illnesses</a> like cancer, that MDMA (3,4-methylenedioxymethamphetamine; also known as ecstasy) <a href="http://journals.sagepub.com/doi/abs/10.1177/0269881112464827">improves outcomes for people suffering from PTSD</a> and that <a href="http://journals.sagepub.com/doi/abs/10.1177/0269881111420188">psychedelics can produce sustained feelings of openness that are both therapeutic and personally enriching</a>. </p>
<p>Other researchers are investigating the traditional uses of plant medicines, such as ayahuasca, and exploring <a href="https://www.sciencedirect.com/science/article/abs/pii/S0361923016300454#!">the neurological and psychotherapeutic benefits of combining Indigenous knowledge with modern medicine</a>.</p>
<p>I am a medical historian, exploring why we now think that psychedelics may have a valuable role to play in human psychology, and why over 50 years ago, during the heyday of psychedelic research, we rejected that hypothesis. What has changed? What did we miss before? Is this merely a flashback?</p>
<h2>Healing trauma, anxiety, depression</h2>
<p>In 1957, the word <em>psychedelic</em> officially entered the English lexicon, introduced by <a href="https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/j.1749-6632.1957.tb40738.x">British-trained and Canadian-based psychiatrist Humphry Osmond</a>. </p>
<p>Osmond studied mescaline from the peyote cactus, synthesized by German scientists in the 1930s, and <a href="https://link.springer.com/chapter/10.1007%2F978-1-4614-0959-5_22">LSD, a laboratory-produced substance created by Albert Hofmann at Sandoz in Switzerland</a>. During the 1950s and into the 1960s, more than 1,000 scientific articles appeared as researchers around the world interrogated the potential of these psychedelics for healing addictions and trauma. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/229658/original/file-20180727-106496-gys7ad.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In this January 1967 file photo, Timothy Leary addresses a crowd of hippies at the ‘Human Be-In’ that he helped organize in Golden Gate Park, San Francisco, Calif.</span>
<span class="attribution"><span class="source">(AP Photo/Bob Klein)</span></span>
</figcaption>
</figure>
<p>But, by the end of the 1960s, most legitimate psychedelic research ground to a halt. Some of the research had been deemed unethical, <a href="http://books.wwnorton.com/books/The-Search-for-the-Manchurian-Candidate/">namely mind-control experiments conducted under the auspices of the CIA</a>. Other researchers had been discredited for either unethical or self-aggrandizing use of psychedelics, or both. </p>
<p><a href="https://www.salon.com/2013/12/14/timothy_learys_liberation_and_the_cias_experiments_lsds_amazing_psychedelic_history/">Timothy Leary was perhaps the most notorious character in that regard</a>. Having been dismissed from Harvard University, he launched a recreational career as a self-appointed apostle of psychedelic living. </p>
<p>Drug regulators struggled to balance a desire for scientific research with <a href="https://academic.oup.com/jhmas/article-abstract/69/2/221/748833">a growing appetite for recreational use, and some argued abuse, of psychedelics</a>. </p>
<p>In the popular media, <a href="http://www.saynotodrugs.in/facts-about-lsd/">these drugs came to symbolize hedonism and violence</a>. In the United States, <a href="https://circulatingnow.nlm.nih.gov/2017/03/30/lsd-insight-or-insanity-1968/">the government sponsored films aimed at scaring viewers about the long-term and even deadly consequences of taking LSD</a>. Scientists were hard-pressed to maintain their credibility as popular attitudes began to shift.</p>
<p>Now that interpretation is beginning to change.</p>
<h2>A psychedelics revival</h2>
<p>In 2009, <a href="https://www.theguardian.com/politics/2009/oct/30/drugs-adviser-david-nutt-sacked">Britain’s chief drug adviser, David Nutt, reported that psychedelic drugs had been unfairly prohibited</a>. He argued that substances such as alcohol and tobacco were in fact much more dangerous to consumers than drugs like LSD, ecstasy (MDMA) and mushrooms (psilocybin). </p>
<p>He was fired from his advisory position as a result, but <a href="https://www.sciencedirect.com/science/article/pii/S0140673610614626">his published claims helped to reopen debates on the use and abuse of psychedelics</a>, both in scientific and policy circles.</p>
<p>And Nutt was not alone. Several well-established researchers began joining the chorus of support for new regulations allowing researchers to explore and reinterpret the neuroscience behind psychedelics. Studies ranged from those <a href="http://www.pnas.org/content/113/17/4853.short">looking at the mechanisms of drug reactions</a> to those <a href="https://doi.org/10.1007/s00213-017-4771-x">revisiting the role of psychedelics in psychotherapy</a>.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=419&fit=crop&dpr=1 600w, https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=419&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=419&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=526&fit=crop&dpr=1 754w, https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=526&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/231973/original/file-20180814-2903-yn3fhf.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=526&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In this April 2010 photo, one gram of psilocybin is seen on a scale at New York University, where a study investigated the effects of hallucinogenic drugs on the emotional and psychological state of advanced cancer patients.</span>
<span class="attribution"><span class="source">(AP Photo/Seth Wenig)</span></span>
</figcaption>
</figure>
<p>In 2017, Oakland, Calif., hosted the largest gathering to date of psychedelic scientists and researchers. Boasting attendance of more than 3,000 participants, <a href="http://psychedelicscience.org/">Psychedelic Science 2017</a> brought together researchers and practitioners with a diverse set of interests in reviving psychedelics — from filmmakers to neuroscientists, journalists, psychiatrists, artists, policy advisers, comedians, historians, anthropologists, Indigenous healers and patients. </p>
<p>The conference was co-hosted by the leading organizations dedicated to psychedelics — <a href="http://www.maps.org/resources/psychedelic-bibliography">including the Multidisciplinary Association for Psychedelic Studies (MAPS)</a> and <a href="http://beckleyfoundation.org">The Beckley Foundation</a> — and participants were exposed to cutting-edge research.</p>
<h2>Measuring reaction, not experience</h2>
<p>As a historian, however, I am trained to be sceptical of trends that claim to be new or innovative. Historians learn that we tend, culturally, to forget the past, or to ignore the parts of it that seem beyond our borders. </p>
<p>For that reason, I am particularly interested in understanding the so-called psychedelic renaissance and what makes it different from the psychedelic heyday of the 1950s and 1960s.</p>
<p>The historic trials were conducted at the very early stages of the pharmacological revolution, which ushered in new methods for evaluating efficacy and safety, culminating in the randomized controlled trial (RCT). Prior to standardizing that approach, however, most pharmacological experiments relied on case reports and data accumulation that did not necessarily involve blinded or comparative techniques. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=392&fit=crop&dpr=1 600w, https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=392&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=392&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=492&fit=crop&dpr=1 754w, https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=492&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/231972/original/file-20180814-2894-1mobpn5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=492&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Shaman Pablo Flores pours ayahuasca into a plastic cup during a sacred ceremony in the Peruvian Jungle in May 2018.</span>
<span class="attribution"><span class="source">(AP Photo/Martin Mejia)</span></span>
</figcaption>
</figure>
<p>Historically, scientists were keen to separate pharmacological substances from their organic cultural, spiritual and healing contexts — the RCT is a classic representation of our attempts to measure reaction rather than to interpret experience. Isolating the drug from an associated ritual might have more readily conveyed an image of progress, or a more genuine scientific approach. </p>
<p>Today, however, psychedelic investigators are beginning <a href="https://www.theglobeandmail.com/opinion/the-profound-power-of-an-amazonian-plant-and-the-respect-it-demands/article27895775/">to question the decision to excise the drug from its Indigenous or ritualized practices</a>. </p>
<p>Over the past 60 years, we have invested more in psychopharmacological research than ever before. American economists estimate <a href="http://www.dx.doi.org/10.1111/j.1468-0009.2005.00347.x">the amount of money spent on psychopharmacology research to be in the billions annually</a>. </p>
<h2>Rethinking the scientific method</h2>
<p>Modern science has focused attention on data accrual — measuring reactions, identifying neural networks and discovering neuro-chemical pathways. It has moved decidedly away from larger philosophical questions of how we think, what human consciousness is, or how human thought is evolving. </p>
<p>Some of <a href="http://www.mqup.ca/psychedelic-prophets-products-9780773555068.php">those questions inspired the earlier generation of researchers to embark on psychedelic studies in the first place</a>.</p>
<p>We may now have more sophisticated tools for advancing the science of psychedelics. But psychedelics have always inspired harmony between brain and behaviour, individuals and their environments, and an appreciation for western and non-western traditions mutually informing the human experience. </p>
<p>In other words, scientific pursuits need to be coupled with a humanist tradition — to highlight not just how psychedelics work, but why that matters.</p><img src="https://counter.theconversation.com/content/100579/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Erika Dyck receives funding from Social Sciences and Humanities Council (Canada).</span></em></p>Once associated with mind-control experiments and counter-cultural defiance, psychedelics now show great promise for mental health treatments and may prompt a re-evaluation of the scientific method.Erika Dyck, Professor and Canada Research Chair in the History of Medicine, University of SaskatchewanLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/989952018-08-02T21:00:58Z2018-08-02T21:00:58ZCould machine learning mean the end of understanding in science?<figure><img src="https://images.theconversation.com/files/229132/original/file-20180724-194131-1tz8k8a.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Much to the chagrin of summer party planners, weather is a notoriously chaotic system. Small changes in precipitation, temperature, humidity, wind speed or direction, etc. can balloon into an entirely new set of conditions within a few days. That’s why weather forecasts become unreliable more than about seven days into the future — and why picnics need backup plans.</p>
<p>But what if we could understand a chaotic system well enough to predict how it would behave far into the future?</p>
<p>In January this year, scientists did just that. They <a href="https://doi.org/10.1103/PhysRevLett.120.024102">used machine learning to accurately predict the outcome of a chaotic system</a> over a much longer duration than had been thought possible. And the machine did that just by observing the system’s dynamics, without any knowledge of the underlying equations.</p>
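The study used reservoir computing; as a miniature stand-in, the sketch below (model, data and parameters all invented for illustration) fits a simple model to observations of a chaotic map and then forecasts by iterating the learned model, without ever seeing the underlying equation:

```python
import numpy as np

# Generate observations of a chaotic system: the logistic map,
# x_{t+1} = r * x * (1 - x) with r = 3.9 (chaotic regime).
# The "learner" below sees only the data, not this equation.
r = 3.9
x = np.empty(1000)
x[0] = 0.2
for t in range(999):
    x[t + 1] = r * x[t] * (1 - x[t])

# Machine learning in miniature: fit a degree-2 polynomial mapping
# each observed state to the next one.
coeffs = np.polyfit(x[:-1], x[1:], deg=2)

def forecast(x0, steps):
    """Forecast by iterating the learned model from a new start."""
    preds = [x0]
    for _ in range(steps):
        preds.append(np.polyval(coeffs, preds[-1]))
    return preds

# Compare a short forecast against the true dynamics.
truth = [0.3]
for _ in range(5):
    truth.append(r * truth[-1] * (1 - truth[-1]))
learned = forecast(0.3, 5)
```

Because the map is exactly quadratic, a polynomial fit recovers it almost perfectly here; the real study tackled far harder, spatially extended chaos, but the principle — dynamics learned purely from observation — is the same.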
<h2>Awe, fear and excitement</h2>
<p>We’ve recently become accustomed to artificial intelligence’s (AI) dazzling displays of ability. </p>
<p>Last year, a program called AlphaZero <a href="https://arxiv.org/abs/1712.01815">taught itself the rules of chess from scratch</a> in about a day, and then went on to beat the world’s best chess-playing programs. It also taught itself the game of Go from scratch and bettered the previous silicon champion, the algorithm <a href="https://doi.org/10.1038/nature24270">AlphaGo Zero</a>, which had itself mastered the game by trial and error after having been fed the rules.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/229121/original/file-20180724-194143-14k71rl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The behaviour of the Earth’s atmosphere is a classic example of chaos theory.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>Many of these algorithms begin with a blank slate of blissful ignorance, and rapidly build up their “knowledge” by observing a process or playing against themselves, improving at every step, thousands of steps each second. Their abilities have <a href="https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong">variously inspired feelings</a> of awe, fear and excitement, and we often hear these days about what <a href="https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/">havoc they may wreak</a> upon humanity. </p>
<p>My concern here is simpler: I want to understand what AI means for the future of “understanding” in science. </p>
<h2>If you predict it perfectly, do you understand it?</h2>
<p>Most scientists would probably agree that prediction and understanding are not the same thing. The reason lies in the origin myth of physics — and arguably, that of modern science as a whole. </p>
<p>For more than a millennium, the story goes, people used methods handed down by the Greco-Roman mathematician Ptolemy to predict how the planets moved across the sky. </p>
<p>Ptolemy didn’t know anything about the theory of gravity or even that the sun was at the centre of the solar system. His methods involved arcane computations using circles within circles within circles. While they predicted planetary motion rather well, there was no <em>understanding</em> of why these methods worked, and why planets ought to follow such complicated rules. </p>
<p>Then came Copernicus, Galileo, Kepler and Newton. </p>
<p>Newton discovered the fundamental differential equations that govern the motion of the planets. One and the same set of equations described every planet in the solar system.</p>
<p>This was clearly good, because now we <em>understood</em> why planets move. </p>
<p>Solving differential equations turned out to be a more efficient way to predict planetary motion than Ptolemy’s algorithm. Perhaps more importantly, though, our trust in this method allowed us to discover new unseen planets based on a unifying principle — the <a href="https://www.theguardian.com/science/2013/oct/13/newtons-universal-law-of-gravitation">Law of Universal Gravitation</a> — that works on rockets and falling apples and moons and galaxies. </p>
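Newton's template can be sketched numerically. The toy integration below (illustrative values, with units chosen so that GM = 1) steps the differential equation for gravitational acceleration, a = -GM·r/|r|³, with a leapfrog scheme and carries a body around one circular orbit:

```python
import math

# Two-body gravity in the plane, units where GM = 1.
GM = 1.0
x, y = 1.0, 0.0        # initial position
vx, vy = 0.0, 1.0      # velocity for a circular orbit of radius 1
dt = 1e-3
steps = int(2 * math.pi / dt)  # one orbital period, T = 2*pi

for _ in range(steps):
    # Leapfrog (kick-drift-kick): half-kick the velocity...
    r3 = (x * x + y * y) ** 1.5
    vx += -GM * x / r3 * dt / 2
    vy += -GM * y / r3 * dt / 2
    # ...drift the position...
    x += vx * dt
    y += vy * dt
    # ...then half-kick again with the updated acceleration.
    r3 = (x * x + y * y) ** 1.5
    vx += -GM * x / r3 * dt / 2
    vy += -GM * y / r3 * dt / 2

# After one full period the body returns close to its starting point.
```

The same integration scheme, fed the same law, predicts any orbit — which is precisely the unification the text describes.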
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/229123/original/file-20180724-194124-prs65g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The Milky Way Galaxy, which contains our solar system.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>This basic template — finding a set of equations that describe a unifying principle — has been used successfully in physics again and again. This is how we figured out the <a href="https://home.cern/about/physics/standard-model">Standard Model</a>, the culmination of half a century of particle physics, which accurately describes the underlying structure of every atom, nucleus or particle. It is how we are trying to understand high-temperature superconductivity, dark matter and quantum computers. (The <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.3160130102">unreasonable effectiveness</a> of this method has inspired questions about why the universe seems to be so delightfully amenable to a mathematical description.)</p>
<p>In all of science, arguably, the notion of understanding something always refers back to this template: If you can boil a complicated phenomenon down to a simple set of principles, then you have understood it. </p>
<h2>Stubborn exceptions</h2>
<p>However, there are annoying exceptions that spoil this beautiful narrative. Turbulence — one of the reasons why weather prediction is difficult — is a notable example from physics. The vast majority of problems from biology, with their intricate structures within structures, also stubbornly refuse to give up simple unifying principles. </p>
<p>While there is no doubt that atoms and chemistry, and therefore simple principles, underlie these systems, describing them using universally valid equations appears to be a rather inefficient way to generate useful predictions. </p>
<p>In the meantime, it is becoming evident that these problems will easily yield to <a href="https://doi.org/10.1038/nature14539">machine-learning methods</a>. </p>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=628&fit=crop&dpr=1 600w, https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=628&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=628&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=789&fit=crop&dpr=1 754w, https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=789&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/229124/original/file-20180724-194158-1au4jk4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=789&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">AI might help identify new drugs to treat antibiotic-resistant bacteria like Klebsiella, which causes about 10 per cent of all hospital-acquired infections in the United States.</span>
<span class="attribution"><span class="source">NIH</span></span>
</figcaption>
</figure>
<p>Just as the ancient Greeks sought answers from the mystical <a href="http://sp.lyellcollection.org/content/171/1/399.short">Oracle of Delphi</a>, we may soon have to seek answers to many of science’s most difficult questions by appealing to AI oracles.</p>
<p>Such AI oracles are already guiding self-driving cars and stock market investments, and may soon predict which drugs will be effective against a bacterium — and <a href="https://dl.acm.org/citation.cfm?id=2783275">what the weather will look like two weeks ahead</a>. </p>
<p>They will make these predictions much better than we ever could have, and they will do it without recourse to our mathematical models and equations. </p>
<p>It is not inconceivable that, armed with <a href="https://home.cern/topics/large-hadron-collider">data from billions of collisions at the Large Hadron Collider</a>, they might do a better job at predicting the outcome of a particle physics experiment than even physicists’ beloved Standard Model!</p>
<p>As with the inscrutable utterances of the priestesses of Delphi, our AI oracles are also unlikely to be able to explain <em>why</em> they predict what they do. Their outputs will be based on many microseconds of what might be called “experience.” They resemble that caricature of an uneducated farmer who can perfectly predict which way the weather will turn, based on experience and a gut feeling. </p>
<h2>Science without understanding?</h2>
<p>The implications of machine intelligence, for the process of doing science and for the philosophy of science, could be immense. </p>
<p>For example, in the face of increasingly flawless predictions, albeit obtained by methods that no human can understand, can we continue to deny that machines have better knowledge? </p>
<p>If prediction is in fact the primary goal of science, how should we modify the <em>scientific method</em>, the algorithm that for centuries has allowed us to identify errors and correct them?</p>
<p>If we give up on understanding, is there a point to pursuing scientific knowledge as we know it?</p>
<p>I don’t have the answers. But unless we can articulate why science is about more than the ability to make good predictions, scientists might also soon find that a “trained AI could do their job.”</p><img src="https://counter.theconversation.com/content/98995/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Amar Vutha acknowledges support from a Branco Weiss Fellowship, administered by ETH Zurich. </span></em></p>Machine learning is changing the world in ways that we are just beginning to appreciate. But could it change the way we do science and the reasons why we do science?Amar Vutha, Assistant Professor of Physics, University of TorontoLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/993652018-07-12T20:02:44Z2018-07-12T20:02:44ZWhen to trust (and not to trust) peer reviewed science<figure><img src="https://images.theconversation.com/files/227100/original/file-20180711-27021-ndukml.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Academic journals rely on peer review to support editors in making decisions about what to publish. </span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/two-scientists-lab-journal-discussing-results-104042246?src=l2YWk81Y4C4ESJbrfppTPg-1-0">from www.shutterstock.com </a></span></figcaption></figure><p><em>The article is part of our occasional long read series <a href="https://theconversation.com/au/topics/zoom-out-51632">Zoom Out</a>, where authors explore key ideas in science and technology in the broader context of society.</em></p>
<hr>
<p>The words “published in a peer reviewed journal” are sometimes considered as the gold standard in science. But any professional scientist will tell you that the fact an article has undergone peer review is a long way from an ironclad guarantee of quality.</p>
<p>To know what science you should <em>really</em> trust you need to weigh the subtle indicators that scientists consider. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-i-disagree-with-nobel-laureates-when-it-comes-to-career-advice-for-scientists-80079">Why I disagree with Nobel Laureates when it comes to career advice for scientists</a>
</strong>
</em>
</p>
<hr>
<h2>Journal reputation</h2>
<p>The standing of the journal in which a paper is published is the first thing. </p>
<p>Every scientific field has broad journals (like <a href="https://www.nature.com/nature/">Nature</a>, <a href="http://www.sciencemag.org/">Science</a> and <a href="http://www.pnas.org/">Proceedings of the National Academy of Science</a>) as well as many more specialist journals (like the <a href="http://www.jbc.org/">Journal of Biological Chemistry</a>). But it is important to recognise that hierarchies exist. </p>
<p>Some journals are considered more prestigious, or frankly, better than others. The “<a href="https://researchguides.uic.edu/if/impact">impact factor</a>” (which reflects how many citations papers in the journal attract) is one simple, if controversial, measure of the importance of a journal. </p>
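For concreteness, the standard two-year impact factor is just a ratio: citations received this year to items the journal published in the previous two years, divided by the number of citable items published in those two years. The figures below are invented:

```python
# Simplified two-year impact factor (all numbers are made up).
citations_this_year_to_prior_two_years = 2100
citable_items_two_years_ago = 350
citable_items_last_year = 370

impact_factor = citations_this_year_to_prior_two_years / (
    citable_items_two_years_ago + citable_items_last_year
)
# roughly 2.9
```

Part of the controversy is visible even here: the numerator counts citations to *all* of the journal's recent output, so a handful of highly cited papers can inflate the score for every paper the journal publishes.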
<p>In practice every researcher carries a mental list of the top relevant journals in her or his head. When choosing where to publish, each scientist makes their own judgement on how interesting and how reliable their new results are. </p>
<p>If authors aim too high with their target journal, then the editor will probably reject the paper at once on the basis of “interest” (before even considering scientific quality).</p>
<p>If an author aims too low, then they could be selling themselves short – this could represent a missed opportunity for a trophy paper in a top journal that everyone would recognise as significant (if only because of where it was published). </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<p>Researchers sometimes talk their paper up in a cover letter to the editor, and aim for a journal one rank above where they expect the manuscript will eventually end up. If their paper is accepted they are happy. If not, they resubmit to a lower ranked, or in the standard euphemism, a “more specialised journal”. This wastes time and effort, but is the reality of life in science.</p>
<p>Neither editors nor authors like to get things wrong. They are weighing up the pressure to break a story with a big headline against the fear of making a mistake. A mistake in this context means publishing a <a href="http://science.sciencemag.org/content/273/5277/924">result</a> that becomes quickly embroiled in <a href="https://www.space.com/33690-allen-hills-mars-meteorite-alien-life-20-years.html">controversy</a>. </p>
<p>To safeguard against that, three or four peer reviewers (experienced experts in the field) are appointed by the editor to help.</p>
<h2>The peer review process</h2>
<p>At the time of submitting a paper, the authors may suggest reviewers they believe are appropriately qualified. But the editor will make the final choice, based on their understanding of the field and also on how well and how quickly reviewers respond to the task.</p>
<p>The identity of peer reviewers is usually kept secret so that they can comment freely (but sometimes this means they are quite harsh). The peer reviewers will repeat the job of the editor, and advise on whether the paper is of sufficient interest for the journal. Importantly, they will also evaluate the robustness of the science and whether the conclusions are supported by the evidence.</p>
<p>This is the critical “peer review” step. In practice, though, the level of scrutiny remains connected to the standing of the journal. If the work is being considered for a top journal, the scrutiny will be intense. The top journals seldom accept papers unless they consider them to be not only interesting but also watertight and bulletproof – that is, they believe the result is something that will stand the test of time.</p>
<p>If, on the other hand, the work is going into a little-read journal with a low impact factor, then sometimes reviewers will be more forgiving. They will still expect scientific rigour but are likely to accept some data as inconclusive, provided the researchers point out the limitations of their work. </p>
<p>Knowing this is how the process goes, whenever a researcher reads a paper they make a mental note of where the work was published. </p>
<h2>Journal impact factor</h2>
<p>Most journals are reliable. But at the bottom of the list in terms of impact lie two types of journals: </p>
<ol>
<li><p>respectable journals that publish peer reviewed results that are solid but of limited interest – since they may represent dead ends or very specialist local topics</p></li>
<li><p>so-called “predatory” journals, which are more sinister – in these journals the peer review process is either superficial or non-existent, and editors essentially charge authors for the privilege of publishing.</p></li>
</ol>
<p>Professional scientists will distinguish between the two partly based on the publishing house, and even the name of the journal. </p>
<p>The Public Library of Science (<a href="https://www.plos.org/">PLOS</a>) is a reputable publisher, and offers <a href="http://journals.plos.org/plosone/">PLOS ONE</a> for solid science – even if it may only appeal to a limited audience. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/universities-spend-millions-on-accessing-results-of-publicly-funded-research-88392">Universities spend millions on accessing results of publicly funded research</a>
</strong>
</em>
</p>
<hr>
<p><a href="https://www.springernature.com/gp/">Springer Nature</a> has launched a similar journal called <a href="https://www.nature.com/srep/">Scientific Reports</a>. Other good quality journals with lower impact factors include journals of specialist academic societies in countries with smaller populations – they will never reach a large audience but the work may be rock solid. </p>
<p>Predatory journals on the other hand are often broad in scale, published by online publishers managing many titles, and sometimes have the word “international” in the title. They are seeking to harvest large numbers of papers to maximise profits. So names like “The International Journal of Science” should be treated with caution, whereas the “Journal of the Australian Bee Society” may well be reliable (note, I invented these names just to illustrate the point). </p>
<h2>The value of a journal vs a single paper</h2>
<p>Impact factors have become controversial because they have been overused as a proxy for the quality of single papers. However, strictly applied they reflect only the interest a journal attracts, and may depend on a few “jackpot” papers that “go viral” in terms of accumulating citations. </p>
<p>Additionally, while papers in higher impact journals may have undergone more scrutiny, there is more pressure on the editors and on the authors of these top journals. This means shortcuts may be taken more often, the last, crucial control experiment may never be done, and the journals end up being less reliable than their reputations imply. This disconnect sometimes generates sniping about how certain journals aren’t as good as they claim to be – which actually keeps everyone on their toes.</p>
<p>While all the controversies surrounding impact factors are real, every researcher knows and thinks about them or other journal ranking systems (SNIP – Source Normalised Impact per Paper, SJR – SCImago Journal Rank, and others) when they are choosing which journal to publish in, which papers to read, and which papers to trust. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/science-isnt-broken-but-we-can-do-better-heres-how-95139">Science isn't broken, but we can do better: here's how</a>
</strong>
</em>
</p>
<hr>
<h2>Nothing is perfect</h2>
<p>Even if everything is done properly, peer review is not infallible. If authors fake their data very cleverly, for example, then it may be difficult to detect. </p>
<p>Deliberately faking data is, however, relatively rare. Not because scientists are saints but because it is foolish to fake data. If the results are important, others will quickly try to reproduce and build upon them. If a fake result is published in a top journal it is almost certain to be discovered. This does happen from time to time, and it is always a <a href="https://www.nature.com/news/stem-cell-scientist-found-guilty-of-misconduct-1.14974">scandal</a>.</p>
<p>Errors and sloppiness are much more common. This may be related to the increasing urgency, pressure to publish and prevalence of large teams where no one may understand all the science. Again, however, only inconsequential mistakes will survive – most important <a href="https://www.nature.com/news/neutrinos-not-faster-than-light-1.10249">errors</a> will quickly be picked up.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/not-just-available-but-also-useful-we-must-keep-pushing-to-improve-open-access-to-research-86058">Not just available, but also useful: we must keep pushing to improve open access to research</a>
</strong>
</em>
</p>
<hr>
<h2>Can you trust the edifice that is modern science?</h2>
<p>Usually, one can get a feel for how likely it is that a piece of peer reviewed science is solid. That feel comes from weighing the combined pride and reputations of the authors, the journal editors and the peer reviewers. </p>
<p>So I do trust the combination of the peer review system and the fact that science is built on previous foundations. If those foundations are shaky, the cracks will appear quickly and things will be set straight.</p>
<p>I am also heartened by <a href="https://theconversation.com/peer-review-has-some-problems-but-the-science-community-is-working-on-it-99596">new opportunities</a> for even better and faster systems that are arising as a result of advances in information technology. These include models for post-publication (rather than pre-publication) peer review. Perhaps this creates a way to formalise discussions that would otherwise happen on Twitter, and that can raise doubts about the validity of published results.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/bored-reading-science-lets-change-how-scientists-write-81688">Bored reading science? Let's change how scientists write</a>
</strong>
</em>
</p>
<hr>
<p>The journal <a href="https://elifesciences.org/">eLife</a> is turning peer review on its head. It offers to publish everything it deems to be of sufficient interest, then lets authors choose whether or not to answer points raised in peer review after the manuscript has been accepted. Authors can even choose not to go ahead with publication if they think the peer reviewers’ points expose the work as flawed. </p>
<p>eLife also has a system where the reviewers confer and produce a single moderated review, to which their names are appended and which is published. This mitigates the problem of anonymity enabling overly harsh treatment. </p>
<p>All in all, we should feel confident that important science is solid – thanks to peer review, transparency, scrutiny and the reproduction of results – even if more peripheral science remains less thoroughly validated. Nevertheless, in some fields where reproduction is rare or impossible, such as long-term studies that depend on complex statistical data, scientific debate is likely to continue. </p>
<p>But even in these fields, the endless scrutiny by other researchers, together with the proudly guarded reputations of authors and journals, means that even if it will never be perfect, the scientific method remains more reliable than all the others.</p>
<p class="fine-print"><em><span>Merlin Crossley receives funding from the Australian Research Council and the National Health and Medical Research Council. He works at UNSW Sydney, and is on the Trust of the Australian Museum, the Boards of the Australian Science Media Centre, UNSW Press and UNSW Global. </span></em></p>
<p>There’s peer review – and then there’s peer review. With more knowledge you can dive in a little deeper and make a call about how reliable a science paper really is.</p>
<p>Merlin Crossley, Deputy Vice-Chancellor Academic and Professor of Molecular Biology, UNSW Sydney. Licensed as Creative Commons – attribution, no derivatives.</p>