Randomised control trial – The Conversation

How randomised trials became big in development economics (2019-12-09)

<figure><img src="https://images.theconversation.com/files/305633/original/file-20191206-90574-s3efbs.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Esther Duflo (L) and Abhijit Banerjee, who, along with Michael Kremer (not pictured), won the 2019 Nobel Prize in Economic Sciences.</span> <span class="attribution"><span class="source">EPA/CJ Gunther</span></span></figcaption></figure><p>The 2019 Nobel Prize in economics was awarded to three researchers <a href="https://www.nobelprize.org/prizes/economic-sciences/2019/advanced-information/">for</a> “their experimental approach to alleviating global poverty”, one which has “transformed development economics”. </p>
<p>What are randomised experiments? And why have they become so influential in development economics? </p>
<p>Improving the quality of life, particularly for the poor, is considered to be one of the main objectives of modern societies. Doing so requires a certain level of wealth. Economists have been preoccupied for centuries with understanding why some nations have “developed” economically and others have not. </p>
<p>But a more immediate question is: what can be done in the present? More specifically, what policies should less-developed countries adopt to improve the lives of their citizens?</p>
<p>Development economics as a subdiscipline was born in the 1940s and 1950s. For pioneers such as <a href="https://www.iss.nl/en/about-iss/organization/honorary-fellows/paul-rosenstein-rodan">Paul Rosenstein-Rodan</a>,<a href="https://www.nobelprize.org/prizes/economic-sciences/1979/lewis/biographical/"> W. Arthur Lewis</a> and <a href="https://en.wikipedia.org/wiki/Albert_O._Hirschman">Albert Hirschman</a>, development economics meant studying ways in which poor countries could attain large-scale societal transformation. They used a variety of analytical approaches, which pointed towards industrialisation as the driver of development.</p>
<p>Arthur Lewis, a native of Saint Lucia, was awarded the <a href="https://www.nobelprize.org/prizes/economic-sciences/1979/lewis/facts/">1979 Nobel Prize</a> in economics. This was in recognition of his study of the ways in which countries with unlimited supplies of labour – as is the case for many developing countries – could attain economic development.</p>
<p>One view of the challenge of development is that it is fundamentally about answering causal questions. If a country adopts a particular policy, will that cause an increase in economic growth, a reduction in poverty or some other improvement in the well-being of citizens?</p>
<p>In recent decades economists have been concerned about the reliability of <a href="https://www.degruyter.com/view/j/jgd.2015.6.issue-1/jgd-2014-0005/jgd-2014-0005.xml">previously used methods</a> for identifying causal relationships. In addition to those methodological concerns, some have argued that “grand theories of development” are either incorrect or at least have failed to yield meaningful improvements in many developing countries. </p>
<p>Two notable examples are the idea that developing countries may be caught in a poverty trap that requires a “<a href="https://link.springer.com/article/10.1007/s10887-006-9006-7">big push</a>” to escape and the view that <a href="https://link.springer.com/chapter/10.1007/978-3-540-69305-5_25">institutions</a> are key for growth and development.</p>
<p>These concerns about methods and policies provided a fertile ground for randomised experiments in development economics. The surge of interest in experimental approaches in economics began in the early 1990s. Researchers began to use “natural experiments”, where for example random variation was <a href="https://www.jstor.org/stable/2006669">part of a policy</a> rather than decided by a researcher, to look at causation. </p>
<p>But it really gathered momentum in the 2000s, with researchers such as the Nobel awardees designing and implementing experiments to study a wide range of microeconomic questions. </p>
<h2>Randomised trials</h2>
<p>Proponents of these methods <a href="https://economics.mit.edu/faculty/eduflo/pooreconomics">argued</a> that a focus on “small” problems was more likely to succeed. They also argued that randomised experiments would <a href="https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3">bring credibility</a> to economic analysis by providing a simple solution to causal questions.</p>
<p>These experiments randomly allocate a treatment to some members of a group and compare the outcomes against the other members who did not receive treatment. For example, to test whether providing credit helps to grow small firms or increase their likelihood of success, a researcher might partner with a financial institution and randomly allocate credit to applicants that meet certain basic requirements. Then a year later the researcher would compare changes in sales or employment in small firms that received the credit to those that did not. </p>
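The comparison described above can be sketched in a few lines of code. This is a minimal illustrative simulation, not any actual study: the sample size, the noisy baseline growth rate and the assumed five-point effect of credit are all invented for the example.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# 200 hypothetical small-firm credit applicants; the "coin flip" is a shuffle.
applicants = list(range(200))
random.shuffle(applicants)
treated, control = applicants[:100], applicants[100:]

def observed_growth(got_credit):
    """Sales growth (%) = noisy baseline + an assumed 5-point credit effect."""
    return random.gauss(10, 8) + (5 if got_credit else 0)

treated_outcomes = [observed_growth(True) for _ in treated]
control_outcomes = [observed_growth(False) for _ in control]

# Because allocation was random, the difference in mean outcomes a year
# later estimates the causal effect of receiving credit.
effect = (sum(treated_outcomes) / len(treated_outcomes)
          - sum(control_outcomes) / len(control_outcomes))
print(f"Estimated effect of credit: {effect:.1f} percentage points")
```

With random allocation, the estimate lands near the assumed effect; without it, firms that sought and obtained credit might differ systematically from those that did not, and the simple comparison would be biased.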
<p>Randomised trials are not a new research method. They are best known for their use in testing new medicines. The first medical experiment to use controlled randomisation <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3149409/">occurred</a> in the aftermath of the second world war. The British government used it to assess the effectiveness of a drug for <a href="https://doi.org/10.1258/jrsm.2011.11k023">tuberculosis treatment</a>. </p>
<p>In the <a href="https://journals.sagepub.com/doi/pdf/10.1177/0011128700046003004?casa_token=XbxU7ET54eIAAAAA:RorDHa1w1n7bdaUVD79pX7jsrB6Tflu-Q94FXQVaAK7Sb59ChXQ9C2dhxIPq4Z1Tj-y6sXp7paEpmA">early 20th century</a> and <a href="https://journals.sagepub.com/doi/10.1177/0193841X7800200411">mid-20th century</a>, American researchers used experiments like this to examine the effects of various social policies. Examples included income protection and social housing.</p>
<p>The introduction of these methods into development economics also followed an increase in their use in <a href="https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3">other areas of economics</a>. One example was the study of labour markets. </p>
<p>Randomised control trials in economics are now mostly used to evaluate the impact of social policy interventions in poor and middle-income countries. Work by the 2019 Nobel awardees – Michael Kremer, Abhijit Banerjee and Esther Duflo – includes experiments in Kenya and India on <a href="https://www.aeaweb.org/articles?id=10.1257/aer.102.4.1241">teacher attendance</a>, <a href="https://www.aeaweb.org/articles?id=10.1257/app.1.1.112">textbook provision</a>, <a href="https://academic.oup.com/jeea/article-abstract/6/2-3/487/2295851">monitoring of nurse attendance</a> and the <a href="https://www.aeaweb.org/articles?id=10.1257/app.20140287">provision of microcredit</a>.</p>
<p>The approach’s popularity among academics and policymakers is not due only to its seeming ability to solve methodological and policy concerns. It is also due to very deliberate, well-funded advocacy by its proponents.</p>
<h2>Big funders</h2>
<p>A key actor is the Abdul Latif Jameel Poverty Action Lab (J-PAL), which was founded by Duflo and Banerjee. Since its creation in 2003, J-PAL <a href="https://www.jstor.org/stable/pdf/26491530.pdf">has conducted</a> 876 policy experiments in 80 countries. <a href="https://www.amazon.fr/L%C3%A9conomie-comportementale-question-Jean-Michel-Servet/dp/2843772087">One estimate</a> suggests that it received around $300 million between 2003 and 2018, from a range of institutions. These include the World Bank, Britain’s Department for International Development and the Bill and Melinda Gates Foundation.</p>
<p>J-PAL appears to <a href="https://doi.org/10.1111/dech.12378">have been influential</a> in the World Bank’s decision in 2005 to establish a dedicated impact evaluation unit composed of former J-PAL associates to conduct randomised trials. The number of such experiments used in World Bank evaluations <a href="https://doi.org/10.1111/dech.12378">increased</a> from zero in 2000 to just over two thirds of all evaluations in 2010.</p>
<p>These changes have taken place in a broader international context where there is increased emphasis on “<a href="https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/3683.pdf">evidence-based policy</a>”. The idea is that policy decisions should be based on “objective”, “rigorous” and “rational” information and analysis. Although this may seem obvious, there are many debates about what this actually means in practice.</p>
<p>In this context, the advocates of randomised trials have sought to position their preferred methods as the most reliable form of evidence.</p>
<p>But there is strong opposition to the approach. This <a href="https://www.sciencedirect.com/science/article/pii/S0277953617307359">argues that</a> experiments are not as reliable or useful as they are made out to be. Some even say that many such experiments <a href="https://ore.exeter.ac.uk/repository/bitstream/handle/10871/17048/EthicsRCT.pdf">fail to satisfy ethical principles</a> and may actually <a href="https://africasacountry.com/2019/10/the-poverty-of-poor-economics">harm</a> development efforts.</p>
<p><em>This is the first article in a two-part series on randomised trials. The next will argue that the usefulness and appropriateness of randomised trials for development questions has been severely overstated.</em></p>
<p class="fine-print"><em><span>Seán Mfundza Muller receives funding from a European Union-funded project, "Putting People back in Parliament", led by the Dullah Omar Institute (University of the Western Cape), in collaboration with the Parliamentary Monitoring Group, Public Service Accountability Monitor (Rhodes) and Heinrich Boell Foundation (South Africa). He is affiliated with the Public and Environmental Economics Research Centre (University of Johannesburg), regularly making inputs to Parliament oversight of the national budget, advising civil society groups on public finance matters and consulting for private sector organisations on an ad hoc basis. He resigned from the South African Parliamentary Budget Office in 2016. The views expressed are his own.</span></em></p><p class="fine-print"><em><span>Grieve Chelwa has received funding from the DG Murray Trust for work on the economics of alcohol consumption in South Africa. He has also received funding from the Bill and Melinda Gates Foundation, the American Cancer Society and the International Development Research Center 9IDRC) for work on the economics of tobacco control on the African continent. And lastly, he's received funding from the International Growth Center at LSE and the Volkswagen Foundation for his work on the economics of education in Zambia. </span></em></p><p class="fine-print"><em><span>Nimi Hoffmann has received funding from the South African Department of Basic Education to help conduct a nationally representative survey of teachers. She has also received funding from the European Commission to conduct a tracer study of schools for children displaced by conflict and climate collapse in Somalia and Ethiopia. 
She is a lecturer at the Centre for International Education at the University of Sussex and a research fellow at the Centre for International Education, Cape Peninsula University of Technology.</span></em></p><p class="fine-print"><em>Seán Mfundza Muller, Senior Lecturer in Economics, Research Associate at the Public and Environmental Economics Research Centre (PEERC) and Visiting Fellow at the Johannesburg Institute of Advanced Study (JIAS), University of Johannesburg; Grieve Chelwa, Senior Lecturer in Economics, University of Cape Town; Nimi Hoffmann, Lecturer in International Education, University of Sussex. Licensed as Creative Commons – attribution, no derivatives.</em></p>

Is this study legit? 5 questions to ask when reading news stories of medical research (2019-10-09)

<figure><img src="https://images.theconversation.com/files/296114/original/file-20191009-3935-yjqvtr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It can be difficult to work out whether you should believe a study's reported findings.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/cropped-image-male-freelancer-sitting-table-262335083">GaudiLab/Shutterstock</a></span></figcaption></figure><p>Who doesn’t want to know if drinking that second or third cup of coffee a day will improve your memory, or if sleeping too much increases your risk of a heart attack? </p>
<p>We’re invested in staying healthy and many of us are interested in reading about new research findings to help us make sense of our lifestyle choices. </p>
<p>But not all research is equal, and not every research finding should be interpreted in the same way. Nor do all media headlines reflect what was actually studied or found. </p>
<p>So how can you tell? Keep these five questions in mind when you’re reading media stories about new studies.</p>
<h2>1. Has the research been peer reviewed?</h2>
<p>Peer review is a process by which a study is checked by experts in the discipline to assess the study’s scientific validity.</p>
<p>This process involves the researcher writing up their study methods and results, and sending this to a journal. The manuscript is then usually sent to two to three experts for peer review.</p>
<p>If there are major flaws in a study, it’s either rejected for publication, or the researchers are made to address these flaws. </p>
<p>Although the peer-review process isn’t perfect, it shows a study has been subjected to scrutiny. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/peer-review-has-some-problems-but-the-science-community-is-working-on-it-99596">Peer review has some problems – but the science community is working on it</a>
</strong>
</em>
</p>
<hr>
<p>Any reported findings that haven’t been peer reviewed should be read with a degree of reservation.</p>
<h2>2. Was the study conducted in humans?</h2>
<p>Findings from studies conducted in animals such as mice or on cells in a lab (also called <em>in vitro</em> studies) represent the earliest stage of the scientific discovery process. </p>
<p>Regardless of how intriguing they may be, no confident claims about human health should ever be made based on these types of study alone. There is no guarantee that findings from animal or cell studies will ever be replicated in humans.</p>
<h2>3. Are findings likely to represent a causal relationship?</h2>
<p>For a study to have relevance to our day-to-day health, the findings need to reflect a <em>causal</em> relationship rather than just a <em>correlation</em>. </p>
<p>If a study showed that coffee drinking was associated with heart disease, for example, we want to know if this was because coffee actually <em>caused</em> heart disease or whether these two things simply happened to occur together.</p>
<p>In a number of studies that found this association, researchers <a href="https://www.ncbi.nlm.nih.gov/pubmed/18328848">subsequently found</a> that coffee drinkers were more likely to be smokers. The results were therefore more likely to reflect a true causal relationship between smoking and heart disease, with coffee drinking merely along for the ride.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/296132/original/file-20191009-3887-1odwcjl.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Just because something is common among coffee drinkers, doesn’t mean coffee caused it.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/female-hands-holding-cups-coffee-on-261247157?src=DYbOCy048cq5gPoe7MdxSA-1-20">Africa Studio/Shutterstock</a></span>
</figcaption>
</figure>
<p>In observational studies, where researchers observe differences in groups of people, it can sometimes be difficult to disentangle the relationship between variables.</p>
<p>The highest level of evidence regarding causality comes from double-blind placebo controlled randomised controlled trials (RCTs). This experimental type of study, where people are separated into groups to randomly receive either an intervention or placebo (sham treatment), is the best way we can determine if something causes disease. However, it too is not perfect. </p>
<p>Although other types of studies in humans play an important role in our understanding of health and disease, they may only highlight associations that are not indicative of causal relationships.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/clearing-up-confusion-between-correlation-and-causation-30761">Clearing up confusion between correlation and causation</a>
</strong>
</em>
</p>
<hr>
<h2>4. What is the size of the effect?</h2>
<p>It’s not enough to know that an exposure (such as a third cup of coffee or more than nine hours of sleep a night) causes an outcome; it’s also important to clearly understand the strength of this relationship. In other words, how much is your risk of disease going to increase if you are exposed?</p>
<p>If your risk of disease is reported to increase by 50% (which is a <em>relative</em> risk), this sounds quite frightening. However, if the original risk of disease is low, then a 50% increase may not represent a big increase in your <em>absolute</em> risk of disease. A 50% increased risk could mean your risk going from 0.1% to 0.15%, which doesn’t sound quite so dramatic.</p>
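The arithmetic behind this is easy to check for yourself. A minimal worked example, using the hypothetical 0.1% baseline risk from above:

```python
baseline_risk = 0.001      # hypothetical 0.1% absolute risk of disease
relative_increase = 0.50   # a reported "50% increased risk" (relative risk)

# A relative increase multiplies the baseline risk.
new_risk = baseline_risk * (1 + relative_increase)
absolute_increase = new_risk - baseline_risk

print(f"New absolute risk: {new_risk:.2%}")          # 0.15%
print(f"Absolute increase: {absolute_increase:.2%}")  # 0.05%
```

The headline "50% increase" and the absolute change of 0.05 percentage points describe exactly the same finding; only the framing differs.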
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-you-need-to-know-to-understand-risk-estimates-67643">What you need to know to understand risk estimates</a>
</strong>
</em>
</p>
<hr>
<h2>5. Is the finding corroborated by other studies?</h2>
<p>A single study on its own, even if it’s a well-conducted randomised controlled trial, can never be considered definitive proof of a causal relationship between an exposure and disease.</p>
<p>As humans are complex and there are so many variables in any study, we can’t be confident we understand what is actually going on until findings are replicated in many different groups of people, using many different approaches. </p>
<p>Until we have a significant body of evidence that is in agreement, we have to be very careful about our interpretation of the findings from any one study.</p>
<h2>What if these questions aren’t answered?</h2>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/296133/original/file-20191009-3880-153el8k.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Switch news sites or try to see the original study.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/serious-woman-using-laptop-checking-email-1156208407?src=LnIBx_qkb1znqLKD0vsXKg-1-57">Fizkes/Shutterstock</a></span>
</figcaption>
</figure>
<p>If the media report you’re reading doesn’t answer these questions, consider changing news sites or looking at the original paper. Ideally this would be linked in the news article you’re reading, or you can search <a href="https://www.ncbi.nlm.nih.gov/pubmed/">PubMed</a> for the article using a few keywords.</p>
<p>The journal article’s abstract should tell you the type of study, whether it was conducted on humans and the size of the effect. If you’re not blocked by a paywall, you may be able to view the full journal article which should answer all of the questions you have about the study.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/wheres-the-proof-in-science-there-is-none-30570">Where's the proof in science? There is none</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/117836/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Wondering if that latest study finding is too good to be true, or whether it’s as bad as we’re told? Here are five questions to ask to help you assess the evidence.Hassan Vally, Associate Professor, La Trobe UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/932822018-03-15T05:12:31Z2018-03-15T05:12:31ZSpeaking with: Andrew Leigh on why we need more randomised trials in policy and law<figure><img src="https://images.theconversation.com/files/210870/original/file-20180317-104635-kisec6.jpeg?ixlib=rb-1.1.0&rect=0%2C0%2C1175%2C1177&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">AndrewLeigh.com</span>, <span class="license">Author provided</span></span></figcaption></figure><p><a href="https://theconversation.com/randomised-control-trials-what-makes-them-the-gold-standard-in-medical-research-78913">Randomised controlled trials</a> are the gold standard in medical research. Researchers divide participants into two groups using the equivalent of flipping a coin, with one group getting a new treatment and a control group getting either the standard treatment or a placebo. It’s the best way to prove that a new treatment works.</p>
<p>But the benefits of randomised trials aren’t limited to medical applications. Big businesses – like Amazon, Google, <a href="https://theconversation.com/facebook-will-continue-experimenting-on-users-under-closed-guidelines-32510">Facebook</a> and even media organisations – are increasingly <a href="https://theblog.okcupid.com/we-experiment-on-human-beings-5dd9fe280cd5">using randomised trials</a> to test designs and processes that increase their engagement with users and customers. Every time you Google something you’re probably participating in a randomised trial.</p>
<p>And that world of randomisation is the subject of Andrew Leigh’s new book, <a href="https://www.blackincbooks.com.au/books/randomistas">Randomistas: How radical researchers changed our world</a>. Leigh is the current federal member for Fenner, and Labor’s shadow assistant treasurer. But prior to his political life he was a professor of economics at Australian National University.</p>
<p>He spoke with the University of Melbourne’s Fiona Fidler about how we should be using randomised trials more to drive decisions and policy in public life and why we might be missing out on better results in social policy because we’re afraid to test our assertions.</p>
<hr>
<p><em>Andrew Leigh’s <a href="https://www.blackincbooks.com.au/books/randomistas">Randomistas: How radical researchers changed our world</a> is out now from Black Inc books. His podcast on living a healthy, happy and ethical life, The Good Life, is available on <a href="https://itunes.apple.com/au/podcast/the-good-life-andrew-leigh-in-conversation/id1147502226?mt=2">Apple Podcasts</a> or wherever you stream your podcasts.</em></p>
<p><em><a href="https://itunes.apple.com/au/podcast/speaking-with.../id934267338">Subscribe</a> to The Conversation’s Speaking With podcasts on Apple Podcasts, or <a href="http://tunein.com/radio/Speaking-with---The-Conversation-Podcast-p671452/">follow</a> on Tunein Radio.</em></p>
<p><strong>Music</strong></p>
<ul>
<li><a href="http://freemusicarchive.org/music/Blue_Dot_Sessions/The_Contessa/Wisteria">Free Music Archive: Blue Dot Sessions - Wisteria</a></li>
</ul>
<p class="fine-print"><em><span>Fiona Fidler receives funding from the Australian Research Council and IARPA.</span></em></p>Economist, author and MP Andrew Leigh spoke to Fiona Fidler about how we should be using randomised trials more to drive decisions and policy in public life.Fiona Fidler, Associate Professor, School of Historical and Philosophical Studies, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/746182017-08-18T00:19:27Z2017-08-18T00:19:27ZControlled experiments won’t tell us which Indigenous health programs are working<p>Described as “one of the simplest, most powerful and revolutionary tools of research”, the randomised controlled trial (<a href="http://au.wiley.com/WileyCDA/WileyTitle/productCd-1405132663.html">RCT</a>) has yielded a great deal of important information in the health sciences. It is usually held up as the “gold standard” for gathering medical evidence.</p>
<p>The RCT can tell us which procedure or treatment is more effective under tightly controlled situations. This evidence is useful and important, but we also need to know things like what people want from health services, which treatments are preferred, and why some people stick to treatment regimes and some people don’t. </p>
<p>These issues are particularly relevant to remote Australia and Aboriginal and Torres Strait Islander health, where <a href="http://closingthegap.pmc.gov.au/">high levels</a> of illness and early death persist, and where what applies to the tightly controlled conditions of a laboratory rarely translates.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-are-aboriginal-children-still-dying-from-rheumatic-heart-disease-63814">Why are Aboriginal children still dying from rheumatic heart disease?</a>
</strong>
</em>
</p>
<hr>
<p>The <a href="https://ministers.dpmc.gov.au/scullion/2017/10m-year-strengthen-ias-evaluation">government is rolling out</a> its A$40 million plan to evaluate Indigenous health programs. The Evidence and Evaluation Framework aims to strengthen reporting, monitoring and evaluation for programs and services provided to Indigenous Australians.</p>
<p>As Indigenous Affairs Minister Nigel Scullion <a href="http://www.abc.net.au/news/2016-11-18/independent-evidence-proves-aboriginal-run-services-can-work/8036690">said last year</a>:</p>
<blockquote>
<p>When you don’t know anything about any of the programs, then you’re just relying on gut feelings, and that’s not good enough.</p>
</blockquote>
<p>So, the framework will provide information about where government money is being spent, what works and why. However, from a Western biomedical perspective, the randomised controlled trial is afforded an elevated position in establishing what works and why. While <a href="https://www.cis.org.au/app/uploads/2017/06/rr28.pdf">some recommend using RCTs</a> to evaluate Indigenous programs, it is critical to keep in mind why this form of evidence-gathering is not always appropriate in this context.</p>
<h2>Randomised controlled trials aren’t real life</h2>
<p>In health and medical research, the RCT involves <a href="http://catalogue.nla.gov.au/Record/6895093">randomly assigning</a> people to different groups and giving the groups different treatments. The random allocation to groups precludes there being systematic differences between participants at the start of the study. </p>
<p>At the end of the study, any differences between the groups can be attributed to the treatment and not some other factor. RCTs, therefore, are an elegant and efficient way of ruling out competing explanations for an observed effect. </p>
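Why random allocation precludes systematic baseline differences can be seen in a small sketch. The numbers below are purely illustrative (a made-up population with age as the baseline characteristic), not data from any study:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# A hypothetical baseline characteristic (age, in years) that could
# otherwise confound a comparison between groups.
ages = [random.gauss(40, 12) for _ in range(1000)]

# Random allocation: shuffle the participants, then split them in half.
random.shuffle(ages)
group_a, group_b = ages[:500], ages[500:]

# With random assignment, the groups look alike at baseline, so a
# pre-existing difference cannot explain a difference in outcomes.
gap = abs(statistics.mean(group_a) - statistics.mean(group_b))
print(f"Difference in mean age between groups: {gap:.2f} years")
```

The same logic applies to characteristics the researchers never measured, which is the method's real strength: randomisation balances unobserved as well as observed differences, on average.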
<p>However, research participants and scenarios in randomised controlled trials are often unlike the patients and settings to which the evidence will ultimately be applied. For example, RCTs have demonstrated that psychological treatments delivered through the internet <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3636304/">can be effective</a> for a wide range of disorders. But in real-world settings, <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3636304/">adherence rates to internet treatments</a> are very low, so the RCT result has little practical meaning. </p>
<p>The issue of which particular outcome should take priority can also be difficult to resolve through the RCT approach to research. Most RCTs prioritise the clinical perspective, such as a measurable change in a particular health outcome. However, there can be a <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1971011/">mismatch</a> between what doctors view as success and what patients and their loved ones perceive as a positive outcome following drug or other forms of treatment. </p>
<p>For example, it is known anecdotally in Alice Springs that some Aboriginal Australians who could benefit from kidney dialysis treatment prefer, instead, to go back to their community to be on country. While this can be detrimental to their physical health, it has important cultural significance for them. </p>
<p>The RCT approach in this situation would undoubtedly demonstrate the health benefits of kidney dialysis. But understanding this problem in the context of real lives requires different methodologies. Unless we design research programs to consider why people would rather stay on country than receive effective health treatments, Aboriginal health may not improve. </p>
<h2>How best to gather evidence</h2>
<p>Valuable work can be conducted by health professionals and service providers collecting data during their regular daily activities. The model of the <a href="http://journals.sagepub.com/doi/abs/10.1177/0002764206297585">“scientist-practitioner”</a> often observed in clinical psychology could be applied to great effect in remote Australia. </p>
<p>This model promotes a seamless transition between science and practice in which the individual is both researcher and clinician. Scientist-practitioners adopt a critical stance to their clinical practice and routinely demonstrate, through evaluation, the value of the service they are providing. </p>
<p>Such a model was used in a GP practice in rural Scotland. There, clinicians found that one simple change to the way appointments were scheduled almost doubled the number of patients, over a six-month period, who were able to access a psychology service within a reasonable time of referral from their GP.</p>
<p>Rather than clinicians advising patients when to attend the next appointment, systems were organised so patients booked appointments in the same way they would to see a GP. The changes were quantified by clinician-researchers who <a href="https://www.cambridge.org/core/journals/the-cognitive-behaviour-therapist/article/div-classtitlewhen-is-enough-enough-structuring-the-organization-of-treatment-to-maximize-patient-choice-and-controldiv/11A23830CD408A60ED63C9EE995EA134">collected these data</a> in the course of their routine clinical practice.</p>
<p>After this change, patients were able to access the service within two weeks of being referred, rather than waiting for seven months as had been the case. Access to services is typically problematic in rural areas, so discovering a cost-effective means of improving access is an important outcome. </p>
<p>The results were so substantial and sudden that they were unequivocal. A large expensive RCT wasn’t necessary to demonstrate this simple change had made important improvements.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/aboriginal-maori-how-indigenous-health-suffers-on-both-sides-of-the-ditch-48238">Aboriginal – Māori: how Indigenous health suffers on both sides of the ditch</a>
</strong>
</em>
</p>
<hr>
<p>This sort of approach could easily be applied in remote Australian settings. An RCT is <a href="http://onlinelibrary.wiley.com/doi/10.1002/cpp.1942/abstract#">not the only way</a>, nor even the best way in all situations, to eliminate alternative reasons for the treatment outcomes obtained. Many important questions are ignored or refashioned inappropriately when only one methodology predominates.</p>
<p>Especially in the area of Indigenous health, the health and medical community must be guided by what patients want, not just by what health professionals <a href="http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(16)32573-9/abstract">know how to do</a>.</p>
<p class="fine-print"><em><span>Professor Tim Carey is the Director of the Centre for Remote Health, Flinders University, in Alice Springs. He is currently a CI on an ARC funded project investigating the impact of the 'fly in fly out' workforce in remote communities. Tim is a past Board Director and Vice President of the Australian Psychological Society and is currently Director of the Australian Rural Health Education Network. He is the 2017-18 Fulbright Northern Territory Senior Scholar.</span></em></p>Like all good health care, improving health in remote settings requires an evidence base. But forcing all research questions into the randomised controlled trial model is not the answer.Timothy A. Carey, Professor, Director of the Centre for Remote Health, Flinders UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/605962016-07-15T12:19:34Z2016-07-15T12:19:34ZBillions are spent on clinical research that gets ignored – here’s the answer<figure><img src="https://images.theconversation.com/files/130048/original/image-20160711-9295-1reuklr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Worth the effort?</span> <span class="attribution"><a class="source" href="http://www.shutterstock.com/cat.mhtml?lang=en&language=en&ref_site=photo&search_source=search_form&version=llv1&anyorall=all&safesearch=1&use_local_boost=1&autocomplete_id=&searchterm=health%20research&show_color_wheel=1&orient=&commercial_ok=&media_type=images&search_cat=&searchtermx=&photographer_name=&people_gender=&people_age=&people_ethnicity=&people_number=&color=&page=1&inline=341386256">Paradise Picture</a></span></figcaption></figure><p>Heart failure is a major killer, <a href="https://www.nice.org.uk/guidance/cg108/chapter/introduction">affecting</a> well over a million people in the UK alone. 
We now have over 20 years’ worth of evidence from clinical trials that show strong benefits for a package of treatment involving not only drugs and devices but also where patients stay, how they are cared for and how the different healthcare professionals work with one another. Yet in many cases, doctors <a href="http://www.ncbi.nlm.nih.gov/pubmed/21159794">are not</a> acting on the findings. </p>
<p>This is just one example of a major problem in healthcare across the world. Billions of pounds are spent each year researching clinical treatments, but a staggering <a href="http://www.thelancet.com/journals/lancet/article/PIIS0140673609603299/fulltext?rss=yes">85% of all research ends up</a> not being put into practice – much of it passed over for reasons that could be avoided. Even when research findings are taken up by clinicians and those in charge of health policy, <a href="http://www.ncbi.nlm.nih.gov/pubmed/22179294">the average delay</a> between publication and practice is 17 years. </p>
<p>The more medical conditions you consider, the more examples crop up. Research into a new care package for chronic kidney disease <a href="http://fampra.oxfordjournals.org/content/early/2016/06/12/fampra.cmw049.abstract">was shown</a> to be effective, for example, but it is not implemented by GPs because they are struggling to prioritise it over other conditions and competing demands.</p>
<p>Or take Bell’s palsy, a condition in which muscle weakness causes a sufferer’s facial features to droop on one side. Many patients are not <a href="http://bmjopen.bmj.com/content/3/7/e003121.short">being given</a> the treatment shown in trials to be the most effective. In non-small cell lung cancer, meanwhile, a new radiotherapy treatment <a href="http://www.ncbi.nlm.nih.gov/pubmed/10577699">has been proven</a> more effective than conventional radiotherapy. Yet it is not widely given because of doctors’ preferences and the practicalities of providing it in hospitals. </p>
<h2>Trials and context</h2>
<p>So what’s the problem? This gap between evidence and practice has produced a whole field of research in its own right called <a href="http://implementationscience.biomedcentral.com">implementation science</a> or knowledge transfer, which has identified various issues. Some trials <a href="https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-015-0917-5%20Pearce%20et%20al">are not</a> of high enough quality. This can be for any number of reasons including problems with the way participants are selected, conducting the wrong trials or conducting the right trials the wrong way. </p>
<p>Other trials are <a href="http://www.economist.com/news/science-and-technology/21659703-failure-publish-results-all-clinical-trials-skewing-medical">not published</a> because they did not produce a result in favour of the new treatment being tested. Initiatives such as the <a href="http://www.alltrials.net/">All Trials campaign</a> aim to get all trials registered and their results published so that we can see the full picture, not a distorted one.</p>
<p>Yet this won’t solve everything. This is because one of the biggest problems, which has perhaps not received enough attention in the past, is that research findings are frequently much less meaningful to clinicians and policymakers in the real world than they could be. </p>
<p>Trials <a href="http://trialsjournal.biomedcentral.com/articles/10.1186/1745-6215-13-95">don’t collect</a> sufficient information about the context in which they were conducted, or about how contextual factors affected the results. So outside the direct trial setting, the results can be less useful, or it <a href="http://bmcmedresmethodol.biomedcentral.com/articles/10.1186/1471-2288-11-79">can be</a> hard to judge whether they will be useful at all.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=722&fit=crop&dpr=1 600w, https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=722&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=722&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=908&fit=crop&dpr=1 754w, https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=908&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/130049/original/image-20160711-9267-1qzmg3l.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=908&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Never simple.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/cat.mhtml?lang=en&language=en&ref_site=photo&search_source=search_form&version=llv1&anyorall=all&safesearch=1&use_local_boost=1&autocomplete_id=&search_tracking_id=fWFKwxQHilRDHgQfoDh61A&searchterm=medication&show_color_wheel=1&orient=&commercial_ok=&media_type=images&search_cat=&searchtermx=&photographer_name=&people_gender=&people_age=&people_ethnicity=&people_number=&color=&page=1&inline=392709652">Jaroon Magnuch</a></span>
</figcaption>
</figure>
<p>Even a seemingly simple switch from one pill to another can stumble because of things like its cost and availability, patient preferences, or beliefs among staff as to the benefits of the old drug. And when it comes to complex team-delivered treatments such as surgery or rehabilitation, the scope for context to matter increases enormously. </p>
<h2>The need to look closer</h2>
<p>Many <a href="http://trialsjournal.biomedcentral.com/articles/10.1186/1745-6215-14-15">specialists believe</a> the answer is to run separate studies alongside clinical trials that aim to understand their context, their processes and
all the relevant variables that come into play. These are expensive, though not prohibitively so, and <a href="http://www.trialforge.org/">work is going on</a> into how to make them cheaper. The UK Medical Research Council last year <a href="https://www.mrc.ac.uk/documents/pdf/mrc-phsrn-process-evaluation-guidance-final/">published guidance</a> on how such studies should be conducted. </p>
<p>One thing lacking from this guidance, however, was much explanation of how context should be explored in these studies. This is because we’ve yet to fully understand the problem. An <a href="http://bmjopen.bmj.com/content/5/12/e009993.abstract">overview of 70 reviews</a> looking at why GPs and other professionals in primary care don’t put research findings into practice recently concluded that future research needs to concentrate on how and why contextual factors play a part. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=393&fit=crop&dpr=1 600w, https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=393&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=393&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=494&fit=crop&dpr=1 754w, https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=494&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/130051/original/image-20160711-9292-1q0wptz.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=494&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">GP’s perspective still poorly understood.</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/cat.mhtml?lang=en&language=en&ref_site=photo&search_source=search_form&version=llv1&anyorall=all&safesearch=1&use_local_boost=1&autocomplete_id=&search_tracking_id=fWFKwxQHilRDHgQfoDh61A&searchterm=medication&show_color_wheel=1&orient=&commercial_ok=&media_type=images&search_cat=&searchtermx=&photographer_name=&people_gender=&people_age=&people_ethnicity=&people_number=&color=&page=1&inline=323840159">Nonwarit</a></span>
</figcaption>
</figure>
<p>There also appears to be another obstacle. There is growing pressure to prioritise funding for research that has the greatest impact on clinical care. Methodology research into the context problem doesn’t have an immediate impact on clinical care, which makes it harder to attract funding. It currently attracts only a small part of the <a href="http://www.hrcsonline.net/pages/uk-health-research-analysis-2014">overall budget</a> for healthcare research. </p>
<p>The paradox is that until we properly understand how context influences trials, their results will continue failing to achieve their potential impact on clinical care. In other words, 85% of research will continue to be wasted. When the alternative is that millions of people do not get the best treatment available, the only logical move is to make this a top priority.</p>
<p class="fine-print"><em><span>Aileen Grant has received funding from NHS Research Scotland, part of the Chief Scientist’s Office, The Tayside Centre for Academic Sciences, NHS Tayside and NHS Lothian. The views in this piece are entirely her own. </span></em></p><p class="fine-print"><em><span>Mary Wells has in the past received funding from Chief Scientist Office, Macmillan Cancer Support, Cancer Research UK, World Cancer Research Fund, Tenovus Scotland, Dundee Cancer Centre, University of Dundee, Tayside Oncology Fund, Big Lottery Fund, NHS Tayside and Molnlyke Healthcare. </span></em></p><p class="fine-print"><em><span>Shaun Treweek has received funding from the Chief Scientist's Office, European Union, National Institute for Health Research, Medical Research Council and the University of Aberdeen’s Development Trust.
</span></em></p>Some 85% of research into drugs and treatments ends up on the cutting room floor but not all of that should.Aileen Grant, Research Fellow, University of StirlingMary Wells, University of StirlingShaun Treweek, Professor of Health Services Research, University of AberdeenLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/450962015-07-23T17:34:46Z2015-07-23T17:34:46ZThe positive impact of deworming in Kenyan schools: the evidence untangled<figure><img src="https://images.theconversation.com/files/89519/original/image-20150723-22806-zzjob.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A health worker dispenses albendazole tablets to a child on National Deworming Day in Kisumu, Kenya. </span> <span class="attribution"><span class="source">Evidence Action, Courtesy of Photoshare</span></span></figcaption></figure><p>A <a href="http://ije.oxfordjournals.org/content/early/2015/07/21/ije.dyv128.abstract">re-analysis</a> of research on school-based deworming in Kenya <a href="http://ije.oxfordjournals.org/content/early/2015/07/21/ije.dyv129.short?rss=1">strongly supports</a> the finding that treatment improves school attendance of both children who are treated and those who are not. </p>
<p>The re-analysis confirms the core findings of our original <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0262.2004.00481.x/abstract">research</a>. These were that: </p>
<ul>
<li><p>Deworming programmes reduced school absenteeism in treatment schools by one-quarter,</p></li>
<li><p>It was cheaper than other ways of increasing school participation, and </p></li>
<li><p>It appeared to improve school attendance in schools where no children were treated. </p></li>
</ul>
<p>The re-analysis also corrects a few errors and some mislabelling, and clarifies a few data issues in the original research. Our <a href="http://ije.oxfordjournals.org/content/early/2015/07/21/ije.dyv129.short?rss=1">assessment</a>, as authors of the original study, is that the re-analysis provides a welcome endorsement of the efficacy of school-based deworming interventions. We do, however, disagree with some of its findings.</p>
<p>Deworming is currently being implemented as policy in many parts of the <a href="http://wber.oxfordjournals.org/content/early/2015/06/03/wber.lhv008.abstract">developing world</a>. Recent estimates suggest that 280 million children, out of 870 million in need, are treated for worms, many via school-based and community-based programmes. This focus on deworming was triggered, in large part, by the findings of randomised controlled trials conducted by a team of economists, including myself, between 1998 and 2004.</p>
<p>That the core of these findings has been confirmed by epidemiologists underscores the importance of the approach adopted by policymakers like the World Health Organisation (WHO). It has recommended mass treatment once a year in regions where worm prevalence is 20% and twice a year in regions where it is 50%. Our recently published <a href="http://wber.oxfordjournals.org/content/early/2015/06/03/wber.lhv008.abstract">paper</a> suggests that the WHO recommendations are justified on human rights, welfare economics and cost-effectiveness grounds.</p>
<h2>Areas of agreement and disagreement</h2>
<p>The re-analysis consists of two papers. The first is a pure replication exercise. This is useful. The differences are relatively minor, and the bottom-line policy conclusions remain largely intact.</p>
<p>Policies have been developed based on three main findings in <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0262.2004.00481.x/abstract">our earlier research</a>. </p>
<ul>
<li><p>First, that school-based deworming in Kenya led to large drops in worm infections for those taking the tablets and for other community members by breaking the cycle of transmission.</p></li>
<li><p>Second, that school attendance rose sharply in the deworming treatment schools over the two years of the study. </p></li>
<li><p>Third, since deworming is cheap, it achieves these two previous goals very cost effectively. The study has figured into recent policy debates about the attractiveness of mass deworming in several countries, including Kenya.</p></li>
</ul>
<p>We differ with one aspect of the pure replication report. We disagree with its interpretation of the results for people who did not necessarily take the drugs themselves but may still have benefited because they lived within 3km to 6km of treatment schools.</p>
<p>The updated estimates are too <a href="http://dx.doi.org/10.1093/ije/dyv128">“noisy”</a> to be useful. What we mean by this is that when the data is updated, the statistical estimates are not informative, and neither positive nor negative values can be ruled out within a wide range. Our interpretation is that it is not worth putting much weight on these estimates, but the replication authors use these uninformative figures to cast doubt on all the other results in the paper. This is inappropriate. </p>
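<p>To make the idea of a “noisy” estimate concrete, here is a minimal sketch in Python. The numbers are purely hypothetical and are not drawn from the deworming study; the point is only that an estimate is uninformative when its confidence interval is so wide that it spans zero, so neither a positive nor a negative effect can be ruled out.</p>

```python
def confidence_interval(estimate, standard_error, z=1.96):
    """Approximate 95% confidence interval for a point estimate."""
    return (estimate - z * standard_error, estimate + z * standard_error)

def rules_out_zero(interval):
    """True if the interval excludes zero, i.e. 'no effect' can be ruled out."""
    lo, hi = interval
    return lo > 0 or hi < 0

# Hypothetical numbers for illustration only -- not from the deworming study.
precise = confidence_interval(0.25, 0.05)  # small standard error
noisy = confidence_interval(0.25, 0.40)    # large standard error

print(rules_out_zero(precise))  # True: narrow interval, informative
print(rules_out_zero(noisy))    # False: interval spans zero, uninformative
```

<p>With the same point estimate of 0.25, the first interval is roughly (0.15, 0.35) and clearly positive, while the second runs from about &minus;0.53 to 1.03 and tells us almost nothing.</p>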
<p>The second part of the re-analysis is a statistical replication which assesses the “robustness” of the 2004 findings. This is a common goal of researchers – figuring out how much conclusions change when assumptions are changed.</p>
<p>We identified more substantive problems with the statistical replication, which contains a number of <a href="http://dx.doi.org/10.1093/ije/dyv128">analytical errors</a> and draws conclusions that are contradicted by the data.</p>
<p>Of particular concern are the questions raised about the robustness of the original research result that deworming has benefits for school attendance. To assess robustness in our original research, we considered statistical models, samples, approaches to weighting data, and definitions of deworming “treatment”. </p>
<p>The questions raised about the robustness of the result are based on two puzzling analytical choices, neither of which can be justified with data. These are:</p>
<ul>
<li><p>Incorrectly defining the deworming treatment variable, that is, when the children received their treatment. The re-analysis suggests there is ambiguity around treatment dates. But there was none: the original research paper, data and project documentation are all crystal clear. </p></li>
<li><p>The most glaring error is that they chose to split the data into separate years. In doing so they uncovered “unexpected” patterns in the data, regarding the correlation between deworming treatment in a school and the number of attendance observations collected in that school. They argue this pattern could “bias” the analysis. </p></li>
</ul>
<p>We directly tested whether these “unexpected” patterns existed in the data and concluded that they don’t. We continue to believe that, in the absence of any justification for splitting the data set, the analysis of the Kenya deworming data should be carried out on the full data set. </p>
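<p>The statistical intuition for why splitting the sample weakens the analysis can be sketched with a back-of-the-envelope calculation (the figures below are hypothetical, not the study’s data): the standard error of an estimate shrinks with the square root of the sample size, so halving the data inflates the standard error of each year’s estimate by a factor of roughly &radic;2.</p>

```python
import math

def standard_error_of_mean(sd, n):
    """Standard error of a sample mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Hypothetical: same outcome variability, pooled sample vs one year's half-sample.
sd = 10.0
se_full = standard_error_of_mean(sd, 1000)  # full pooled data set
se_half = standard_error_of_mean(sd, 500)   # data split by year

print(se_half / se_full)  # ~1.414: each split-sample estimate is ~41% noisier
```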
<p>This leads to the robust conclusion that deworming improved school attendance in the rural schools that we studied. These results contribute to a growing research literature finding large positive long-run impacts of deworming on educational and labour market outcomes.</p>
<p class="fine-print"><em><span>Edward Miguel does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A re-analysis of research into deworming interventions at Kenyan schools has confirmed some findings and disputed others. However, it does not take away from the programme’s effectiveness.Edward Miguel, Oxfam Professor of Environmental and Resource Economics, University of California, BerkeleyLicensed as Creative Commons – attribution, no derivatives.