People don’t mate randomly – but the flawed assumption that they do is an essential part of many studies linking genes to diseases and traits<figure><img src="https://images.theconversation.com/files/496010/original/file-20221117-25-slwoe3.jpg?ixlib=rb-1.1.0&rect=110%2C96%2C4690%2C2134&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Statistical pitfalls in GWAS can result in misleading conclusions about whether some traits (like long horns or spotted skin, in the case of dinosaurs) are genetically linked.</span> <span class="attribution"><span class="source">@meanymoo</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>The idea that <a href="https://doi.org/10.1002/0471667196.ess0209.pub2">correlation does not imply causation</a> is a fundamental caveat in epidemiological research. A classic example involves a hypothetical link between ice cream sales and drownings – instead of increased ice cream consumption causing more people to drown, it’s plausible that a third variable, summer weather, is driving up an appetite for ice cream and swimming, and hence opportunities to drown.</p>
<p>But what about correlations involving genes? How can researchers be sure that a particular trait or disease is truly genetically linked, and not caused by something else?</p>
<p>We are <a href="https://www.richardborder.com">statistical</a> <a href="https://scholar.google.com/citations?user=SPXgieEAAAAJ&hl=en">geneticists</a> who study the genetic and nongenetic factors that influence human variation. In our <a href="https://www.science.org/doi/10.1126/science.abo2059">recently published research</a>, we found that the genetic links between traits found in many studies might not be connected by genes at all. Instead, many are a result of how humans mate.</p>
<h2>Genome-wide association studies try to link genes to traits</h2>
<p>Because the genes you inherit from your parents remain unchanged throughout your life, with rare exception, it makes sense to assume that there is a causal relationship between certain traits you have and your genetics.</p>
<p>This logic is the basis for <a href="https://www.genome.gov/about-genomics/fact-sheets/Genome-Wide-Association-Studies-Fact-Sheet">genome-wide association studies, or GWAS</a>. These studies collect DNA from many people to identify positions in the genome that might be correlated with a trait of interest. For example, if you have certain forms of the <a href="https://www.cancer.gov/about-cancer/causes-prevention/genetics/brca-fact-sheet"><em>BRCA1</em> and <em>BRCA2</em> genes</a>, you may have an increased risk for certain types of cancer.</p>
<p>Similarly, there may be gene variants that play a role in whether or not someone has schizophrenia. The hope is to learn something about the complex mechanisms that link variation at the molecular level to individual differences. With a clearer understanding of the genetic basis of different traits, scientists would be better able to determine risk factors for related diseases. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/sOP8WacfBM8?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Genome-wide association studies seek to find links between genetic variants and individual traits.</span></figcaption>
</figure>
<p>Researchers have run <a href="https://doi.org/10.1093/nar/gky1120">thousands of GWAS to date</a>, identifying genetic variants associated with myriad diseases and disease-related traits. In many instances, researchers have identified genetic variants that affect more than one trait. This form of biological overlap, in which the same genes are thought to influence several apparently unrelated traits, is known as <a href="https://doi.org/10.1186/s13073-016-0332-x">pleiotropy</a>. For example, certain variants of the <a href="https://medlineplus.gov/genetics/gene/pah"><em>PAH</em> gene</a> can have <a href="https://medlineplus.gov/genetics/condition/phenylketonuria/">several distinct effects</a>, including altering skin pigmentation and causing seizures.</p>
<p>One way scientists assess pleiotropy is through <a href="https://doi.org/10.1038/ng.3604">genetic correlation analysis</a>. Here, geneticists investigate whether the genes associated with a given trait are associated with other traits or diseases by statistically analyzing large samples of genetic data. Over the past decade, genetic correlation analysis has become the primary method for assessing potential pleiotropy across fields as diverse as <a href="https://doi.org/10.1038/ng.3406">internal medicine</a>, <a href="https://www.thessgac.org">social science</a> and <a href="https://doi.org/10.1017/s0033291717002318">psychiatry</a>. </p>
<p>Scientists use the findings from genetic correlation analyses to figure out the potential shared causes of these traits. For instance, if <a href="https://doi.org/10.1126/science.aap8757">genes associated with bipolar disorders</a> also predict anxiety disorders, the two conditions may partially involve some of the same neural circuits or respond to similar treatments.</p>
<h2>Assortative mating and genetic correlation</h2>
<p>However, just because a gene is correlated with two or more traits doesn’t necessarily mean it causes them.</p>
<p>Virtually all the statistical methods researchers commonly use to assess genetic correlations <a href="https://doi.org/10.1046/j.1439-0388.2002.00356.x">assume that mating is random</a>. That is, they assume that potential mating partners decide who they will have children with based on a roll of the dice. In reality, many factors likely influence who mates with whom. The simplest example of this is geography – people living in different parts of the world are less likely to end up together than people living nearby.</p>
<p>We wanted to find out how much the assumption of random mating affects the accuracy of genetic correlation analyses. In particular, we focused on the potential confounding effects of <a href="https://doi.org/10.1038/s41562-018-0476-3">assortative mating</a>, or how people tend to mate with those who share similar characteristics with them. Assortative mating is a widely documented phenomenon seen across a broad array of traits, interests, measures and social factors, including <a href="https://doi.org/10.1002/ajhb.22917">height</a>, <a href="https://doi.org/10.2307/2095670">education</a> and <a href="https://doi.org/10.1016/j.biopsych.2019.06.025">psychiatric conditions</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/bK85aZPR3UY?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Humans do not mate randomly – rather, people tend to gravitate toward certain traits.</span></figcaption>
</figure>
<p>In <a href="https://doi.org/10.1126/science.abo2059">our study</a> we examined cross-trait assortative mating, whereby people with one trait (for example, being tall) tend to mate with people with a completely different trait (for example, being wealthy). From our database of 413,980 mate pairs in the U.K. and Denmark, we found evidence of cross-trait assortative mating for many traits – for instance, an individual’s time spent in formal schooling was correlated not only with their mate’s educational attainment, but also with many other characteristics, including height, smoking behaviors and risk for different diseases.</p>
<p>We found that the similarities between mates strongly predicted which traits would appear genetically linked. In other words, just based on how many characteristics a pair of mates shared, we could identify around 75% of the presumed genetic links between these traits – all without sampling any DNA.</p>
<h2>Genetic correlation does not imply causation</h2>
<p>Cross-trait assortative mating shapes the genome. If people with one heritable trait tend to mate with people with another heritable trait, then these two distinct characteristics will become genetically correlated to each other in subsequent generations. This will happen regardless of whether or not these traits are truly genetically linked to each other.</p>
<p>Cross-trait assortative mating means that the genes you inherit from one parent will be correlated with those you inherit from the other. How people mate is not random, violating the key assumption behind genetic correlation analyses. This inflates the genetic association between traits that aren’t truly linked together by genes.</p>
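<p>To see how this works mechanically, here is a toy simulation (a deliberately exaggerated sketch of the argument, not an analysis from our study): two genes that each influence a different trait, and that are independent within every parent, become correlated in the children once mates pair up on each other's traits.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Gene A influences trait X; gene B influences trait Y.
# Within each parent, the two genes are independent.
a1, b1 = rng.normal(size=n), rng.normal(size=n)   # one group of parents
a2, b2 = rng.normal(size=n), rng.normal(size=n)   # their prospective mates
x1 = a1 + rng.normal(scale=0.5, size=n)           # trait X in the first group
y2 = b2 + rng.normal(scale=0.5, size=n)           # trait Y in the second group

# Random mating: pair parents exactly as generated.
r_random = np.corrcoef((a1 + a2) / 2, (b1 + b2) / 2)[0, 1]

# Extreme cross-trait assortment: the highest-X parent pairs with the
# highest-Y mate, and so on down the line.
i, j = np.argsort(x1), np.argsort(y2)
r_assort = np.corrcoef((a1[i] + a2[j]) / 2, (b1[i] + b2[j]) / 2)[0, 1]

print(round(r_random, 2), round(r_assort, 2))
```

<p>Under random mating the children's two genetic values stay uncorrelated; under cross-trait assortment a substantial correlation appears, even though no gene affects both traits.</p>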
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Illustration of dinosaurs with and without long horns or spiked backs." src="https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=424&fit=crop&dpr=1 600w, https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=424&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=424&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=533&fit=crop&dpr=1 754w, https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=533&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/495756/original/file-20221116-21-hyom6p.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=533&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">If dinosaurs with long horns preferentially mate with dinosaurs with spiked backs, genes for both of these traits can become associated with each other in subsequent generations even though the same gene doesn’t code for them.</span>
<span class="attribution"><span class="source">Aaqilah M</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>Recent studies corroborate our findings. Earlier this year, researchers computed genetic correlations using a method that examines the association between the <a href="https://doi.org/10.1038/s41588-022-01062-7">traits and genes of siblings</a>, a design less vulnerable to assortative mating. Estimated this way, the genetic links between traits influenced by cross-trait assortative mating were substantially weaker.</p>
<p>But without accounting for cross-trait assortative mating, using genetic correlation estimates to study the biological pathways causing disease can be misleading. Genes that affect only one trait will appear to influence multiple different conditions. For example, a genetic test designed to assess the risk for one disease may incorrectly detect vulnerability for a broad number of unrelated conditions.</p>
<p>The ability to measure variation across individuals at the genetic and molecular level is truly a feat of modern science. However, genetic epidemiology is still an observational enterprise, subject to the same caveats and challenges facing other forms of nonexperimental research. Though our findings don’t discount all genetic epidemiology research, understanding what genetic studies are truly measuring will be essential to translate research findings into new ways to treat and assess disease.</p>
<p class="fine-print"><em><span>Richard Border receives funding from the National Institutes of Health.</span></em></p><p class="fine-print"><em><span>Noah Zaitlen receives funding from the NIH, NSF, DoD, and CZI.</span></em></p><p class="fine-print"><em>Richard Border, Postdoctoral Researcher in Statistical Genetics, University of California, Los Angeles; Noah Zaitlen, Professor of Neurology and Human Genetics, University of California, Los Angeles. Licensed as Creative Commons – attribution, no derivatives.</em></p>
A recent study suggests red wine may protect you from COVID. But I wouldn’t drink to this yet<figure><img src="https://images.theconversation.com/files/446395/original/file-20220214-19-v8bky9.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C4914%2C3273&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>A <a href="https://www.frontiersin.org/articles/10.3389/fnut.2021.772700/full">study</a> published last month in the journal Frontiers in Nutrition made <a href="https://www.thedrinksbusiness.com/2022/01/drinking-wine-could-lower-risk-of-covid-infection-study-finds/">headlines</a> around the world.</p>
<p>Among a number of findings concerning alcoholic drinks and COVID, it reported drinking red wine was associated with a reduction in the risk of contracting COVID.</p>
<p>Before you start inviting people over to celebrate, it’s important to be aware there are a number of reasons to be cautious about these findings.</p>
<p>This paper is a great example of why many studies addressing diet and health are unreliable and need to be interpreted carefully.</p>
<p>Limitations in the way many of these studies are conducted are the reason we’re often told a food is good for us one day, only for this to be contradicted in another study.</p>
<p>This whiplash in study findings is a source of continued frustration in the field of nutritional science.</p>
<p>Let’s explore some of the reasons why these studies can be misleading.</p>
<h2>What were some of the findings?</h2>
<p>There were a number of findings reported in <a href="https://www.frontiersin.org/articles/10.3389/fnut.2021.772700/full">this study</a>. </p>
<p>Probably the most captivating finding from a media perspective was that drinking between one and four glasses of red wine a week was associated with an approximate 10% reduction in the risk of getting COVID.</p>
<p>Drinking five or more glasses of red wine a week was associated with a reduction in risk of 17%.</p>
<p>Although drinking white wine and champagne also appeared protective, the effect was smaller than with red wine.</p>
<p>In contrast, drinking beer was associated with a 7–28% increased risk of getting COVID.</p>
<p>It was hard to identify clear patterns with some of the other findings. For example, while drinking spirits was associated with an increased risk of contracting COVID, drinking fortified wine, in small doses only, appeared protective.</p>
<p>Similarly, while drinking alcohol more frequently was associated with a lower risk of getting COVID, drinking more than the UK guidelines for alcohol consumption was associated with an increased risk of contracting COVID.</p>
<p>Let’s delve deeper into the findings concerning red wine to explore some of the reasons why one should be sceptical about the results of these sorts of studies.</p>
<h2>Correlation doesn’t equal causation</h2>
<p>The first and most obvious reason to be cautious when interpreting this study is <a href="https://www.abs.gov.au/websitedbs/D3310114.nsf/home/statistical+language+-+correlation+and+causation">correlation doesn’t equal causation</a>.</p>
<p>You hear this phrase all the time, but that’s because it’s <em>so</em> important to make the distinction between two variables being simply linked with each other, and one causing the other.</p>
<p>This analysis drew on data from a large longitudinal study – a study design in which researchers recruit participants and track them over time, collecting information about their behaviours and health. Although this study, the <a href="https://www.ukbiobank.ac.uk/">UK Biobank cohort</a>, had an impressive number of participants, the analysis simply involved looking for associations between alcohol consumption patterns and COVID diagnoses.</p>
<p>As this was an observational study where data was collected and analysed from people living their lives normally, all one can say with confidence is drinking red wine was associated with a lower likelihood of having been diagnosed with COVID. One can’t say drinking red wine was actually the reason the risk of contracting disease in this group was lower.</p>
<p>It’s entirely possible this association reflected other differences between red wine drinkers and those who developed COVID. This phenomenon is called “confounding”, and it’s very hard to completely remove the effect of confounding in observational studies to tease out what’s really going on.</p>
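<p>A tiny simulation makes the point concrete (the confounder here is hypothetical, invented purely for illustration, not a variable from the study): if some third factor – say, general health-consciousness – nudges people both toward red wine and away from COVID exposure, wine and COVID risk become correlated even though wine does nothing at all.</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confounder: a "health-consciousness" score.
health = rng.normal(size=n)
red_wine = 0.6 * health + rng.normal(size=n)      # wine habit tracks health
covid_risk = -0.6 * health + rng.normal(size=n)   # note: no red_wine term

r = np.corrcoef(red_wine, covid_risk)[0, 1]
print(round(r, 2))  # a clear negative association, by construction non-causal
```

<p>The correlation is real, but attributing it to a protective effect of wine would be exactly the mistake confounding invites.</p>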
<p>Although the researchers made attempts to statistically adjust the results to account for some obvious confounders in this study – such as age, sex and education level – this type of adjustment isn’t perfect.</p>
<p>There’s also no guarantee there weren’t other sources of confounding in the study that weren’t considered. </p>
<h2>Data on alcohol drinking is unreliable</h2>
<p>There are two major limitations in the data collected relating to alcohol drinking patterns in this study. </p>
<p>The first is collecting information on what people eat and drink is <a href="https://www.nature.com/articles/522259f">notoriously unreliable</a>.</p>
<p>And even more of a problem is the extent of this misreporting tends to vary considerably between people, making it very difficult to correct for.</p>
<p>The second major limitation was the researchers collected data on alcohol drinking patterns at the beginning of this longitudinal study and extrapolated forward many years to complete the analysis. That is, researchers looked at drinking patterns at the start of the study and assumed people had the same drinking patterns for the whole study.</p>
<p>Clearly a person’s drinking patterns could change considerably over the years, so this also introduces a great deal of potential error.</p>
<h2>The public health significance is questionable</h2>
<p>Another reason to temper your response to these findings is that, even if we assume red wine reduces the risk of COVID infection, the key question is whether a 10–17% reduction in this risk compared with non-drinkers has any real-world significance.</p>
<p>That is, how does this finding impact on our response to COVID?</p>
<p>Considering the huge benefit one can gain from other measures such as wearing masks, social distancing, improved hand hygiene, and getting vaccinated, this reduction in risk (if real) is marginal, and doesn’t translate to any significant protection from COVID.</p>
<p>The reality is drinking red wine solely to reduce your risk of contracting COVID isn’t something that can be recommended on the basis of this study – especially considering the other potentially detrimental effects of drinking alcohol.</p>
<h2>Putting it together</h2>
<p>Observational studies addressing aspects of our diet and health come with numerous and significant challenges.</p>
<p>They’re highly susceptible to the presence of confounders and biases, which limit their reliability and make the interpretation of their findings fraught.</p>
<p>So it’s really important the results from these types of studies are interpreted with a great deal of caution.</p>
<p>Therefore, the message when it comes to drinking alcohol remains that you shouldn’t drink because of any perceived health benefits relating to COVID or any other illness. If you drink, drink moderately because it brings you pleasure, and be clear that this is why you are drinking.</p>
<p>While this isn’t the news any of us wanted to hear, this shouldn’t be a surprise, because if something sounds too good to be true, it usually is.</p>
<p class="fine-print"><em><span>Hassan Vally does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p><p class="fine-print"><em>Hassan Vally, Associate Professor, Deakin University.</em></p>
Vital Signs: the pros and cons of diversity in organisations<figure><img src="https://images.theconversation.com/files/396466/original/file-20210422-15-1mr0wkl.jpg?ixlib=rb-1.1.0&rect=54%2C380%2C6048%2C3277&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="attribution"><span class="source">Rawpixel.com/Shutterstock</span></span></figcaption></figure><p>Breaking down the old boys’ club in business, government and other organisations is intrinsically important. Ensuring greater diversity in organisations – on gender, racial, ethnic and other lines – is, simply put, the right thing to do.</p>
<p>But some advocates of greater diversity make an extra claim: that it improves the quality of decisions, and hence an organisation’s performance. Do the right thing <em>and</em> increase profits or effectiveness. What’s not to like?</p>
<p>Robust empirical evidence to support this claim – that more diverse organisations perform better – is tricky to provide. One can look at more diverse organisations and compare them to less diverse ones. Suppose that more diverse organisations do, in fact, perform better. What does one conclude?</p>
<p>Well, what one should definitely not conclude is that greater diversity <em>causes</em> better performance. </p>
<h2>Correlation doesn’t prove causation</h2>
<p>Those things may be correlated. But that could easily be because higher-quality organisations want to, or can afford to, be more diverse. Or it could be that some other factor correlated with diversity is the true driver of superior performance.</p>
<p>Economists call these “endogeneity problems” – challenges to interpreting a mere correlation between two variables (A and B, say) as evidence that A causes B.</p>
<p>Yet the causal effect of diversity on the performance of organisations is a deeply important question. Ideally one would like to run an experiment where diversity within teams in an organisation is randomly assigned.</p>
<p>Just as pharmaceutical trials randomly assign some patients medication and others a placebo, economists in recent decades have performed field experiments to measure the impacts of all manner of interventions.</p>
<p>The quintessential examples of this paradigm are the experiments that led to development economists Abhijit Banerjee, Esther Duflo and Michael Kremer winning the <a href="https://www.nobelprize.org/prizes/economic-sciences/2019/summary/">2019 Nobel Prize in economic sciences</a>. </p>
<p>A pharmaceutical trial of, say, heart medication can determine its causal effect by looking at the average number of cardiac events in the group taking the medication compared with the control group (those on the placebo). Field experiments in economics can determine the causal effect of all manner of social and economic interventions.</p>
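<p>A short simulation shows why randomisation earns its keep (an illustrative sketch with made-up numbers, not data from any study discussed here): in observational data a confounder manufactures a large apparent treatment effect, while a coin-flip assignment severs the confounder's link to treatment and recovers the true effect, here zero.</p>

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

confounder = rng.normal(size=n)  # e.g. underlying health
# By construction, the treatment has no true effect on the outcome.

# Observational data: healthier people are more likely to take the treatment.
took_it = confounder + rng.normal(size=n) > 0
outcome = confounder + rng.normal(size=n)
naive_effect = outcome[took_it].mean() - outcome[~took_it].mean()

# Field experiment: a coin flip assigns treatment, independent of health.
assigned = rng.random(n) < 0.5
outcome_rct = confounder + rng.normal(size=n)
rct_effect = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(round(naive_effect, 2), round(rct_effect, 2))
```

<p>The naive comparison reports a sizeable “benefit” that is pure confounding, while the randomised comparison correctly finds roughly nothing.</p>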
<p>That is why a research paper by three economists – Benjamin Marx, Vincent Pons and Tavneet Suri – <a href="https://ssrn.com/abstract=3824531">released this month</a> is both interesting and important. It is about just such a field experiment centred on diversity and team performance.</p>
<h2>A diversity experiment</h2>
<p>Their <a href="https://www.nber.org/system/files/working_papers/w28655/w28655.pdf?utm_campaign=PANTHEON_STRIPPED&%3Butm_medium=PANTHEON_STRIPPED&%3Butm_source=PANTHEON_STRIPPED">experiment</a> involved people working as canvassers for a non-profit organisation in Kenya. The work involved going door to door to promote voter registration. Workers were randomly assigned a teammate, a supervisor and a bunch of people to canvass. </p>
<p>Diversity within the teams was along ethnic lines. This led to:</p>
<blockquote>
<p>“random variation within teams in the degree of horizontal diversity (between teammates), vertical diversity (between teammates and their supervisor) and external diversity (between teams and the individuals they canvassed)”.</p>
</blockquote>
<p>Measuring team-level performance, the authors conclude that “horizontal ethnic diversity decreases performance, while vertical diversity often improves performance, and external diversity has no effect”. </p>
<p>Specifically, teams that were ethnically homogeneous were 20% more efficient in completing their visits than diverse teams. But teams with a manager of the same ethnicity as one of the teammates were about 7.5% less efficient.</p>
<h2>Horizontal versus vertical</h2>
<p>There may be a trade-off between the horizontal and vertical effects of diversity in organisations.</p>
<p>Diversity within teams might increase “communication costs” due to lack of shared experience or common understanding of how to perform tasks together. Or it might be that people prefer working with people most similar to themselves.</p>
<p>On the other hand, homogeneity throughout an organisation hierarchy may well lead to managers favouring subordinates they more easily relate to.</p>
<p>This is a simple theory, but the authors’ experiment bears it out. Vertical diversity increases performance, perhaps by reducing favouritism. Horizontal diversity decreases performance, perhaps by increasing communication costs.</p>
<h2>Not all benefits are automatic</h2>
<p>As with all experiments, how well the results translate to other contexts is an open question – what is known as “external validity”. </p>
<p>It is possible the results apply only to ethnic diversity among non-profit organisations doing voter registration in Kenya.</p>
<p>Or perhaps there are broader lessons. One might be that vertical diversity is particularly important for breaking down inefficient favouritism. This might be as true in Australia or Japan as in Kenya.</p>
<p>But to know for sure we’d need to see a randomised controlled trial in those exact environments.</p>
<p>The other lesson is that perhaps the downsides of horizontal diversity might be mitigated or overcome through improving training or communication protocols. It might be the “diversity cost” goes away as people get to know each other better.</p>
<p>Diversity is inherently important. Creating more diverse organisations across society is the right thing to do. It can also lead organisations to perform better. </p>
<p>But the latter isn’t automatic. It depends on how the organisation is structured and managed.</p>
<p class="fine-print"><em><span>Richard Holden is President-elect of the Academy of the Social Sciences in Australia.</span></em></p><p class="fine-print"><em>Richard Holden, Professor of Economics, UNSW Sydney.</em></p>
6 tips to help you detect fake science news<figure><img src="https://images.theconversation.com/files/389103/original/file-20210311-20-90hym5.jpg?ixlib=rb-1.1.0&rect=781%2C889%2C4508%2C3098&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If what you're reading seems too good to be true, it just might be.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/dhCGbPx8wpk">Mark Hang Fung So/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>I’m a professor of chemistry, have a Ph.D. and <a href="https://scholar.google.com/citations?user=RpiSPiwAAAAJ&hl=en&oi=ao">conduct my own scientific research</a>, yet when consuming media, even I frequently need to ask myself: “Is this science or is it fiction?”</p>
<p>There are plenty of reasons a science story might not be sound. Quacks and charlatans take advantage of the complexity of science, some content providers can’t tell bad science from good and some politicians peddle fake science to support their positions.</p>
<p>If the science sounds too good to be true or too wacky to be real, or very conveniently supports a contentious cause, then you might want to check its veracity.</p>
<p>Here are six tips to help you detect fake science.</p>
<h2>Tip 1: Seek the peer review seal of approval</h2>
<p>Scientists rely on journal papers to share their scientific results. They let the world see what research has been done, and how.</p>
<p>Once researchers are confident of their results, they write up a manuscript and send it to a journal. Editors forward the submitted manuscripts to at least two external referees who have expertise in the topic. These reviewers can suggest the manuscript be rejected, published as is, or sent back to the scientists for more experiments. That process is called “peer review.”</p>
<p>Research published in <a href="https://undsci.berkeley.edu/article/howscienceworks_16">peer-reviewed journals</a> has undergone rigorous quality control by experts. Each year, about <a href="https://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf">28,000 peer-reviewed journals</a> publish roughly 1.8 million scientific papers. The body of scientific knowledge is constantly evolving and updating, but you can trust that the science these journals describe is sound. Retraction policies help correct the record if mistakes are discovered post-publication.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="man in white coat in lab at laptop" src="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389321/original/file-20210312-15-1iumcql.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">‘Peer-reviewed’ means other scientific experts have checked the study over for any problems before publication.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/scientist-using-computer-in-laboratory-royalty-free-image/1194829395">ljubaphoto/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Peer review takes months. To get the word out faster, scientists sometimes post research papers on what’s called a preprint server. These often have “Rxiv” – pronounced “archive” – in their name: medRxiv, bioRxiv and so on. These articles have not been peer-reviewed and so are <a href="https://doi.org/10.1080/10410236.2020.1864892">not validated by other scientists</a>. Preprints do, however, give other scientists an early opportunity to evaluate the research and use it as a building block in their own work.</p>
<p>How long has this work been on the preprint server? If it’s been months and it hasn’t yet been published in the peer-reviewed literature, be very skeptical. Are the scientists who submitted the preprint from a reputable institution? During the COVID-19 crisis, with researchers scrambling to understand a dangerous new virus and rushing to develop lifesaving treatments, preprint servers have been littered with immature and unproven science. <a href="https://arstechnica.com/science/2020/05/a-lot-of-covid-19-papers-havent-been-peer-reviewed-reader-beware/">Fastidious research standards have been sacrificed for speed</a>.</p>
<p>A last warning: Be on the alert for research published in what are called <a href="https://www.nature.com/articles/d41586-019-03759-y">predatory journals</a>. They don’t peer-review manuscripts, and they charge authors a fee to publish. Papers from any of the <a href="https://guides.library.yale.edu/c.php?g=296124&p=1973764">thousands of known predatory journals</a> should be treated with strong skepticism.</p>
<h2>Tip 2: Look for your own blind spots</h2>
<p>Beware of biases in your own thinking that might predispose you to fall for a particular piece of fake science news.</p>
<p>People give their own memories and experiences more credence than they deserve, making it hard to accept new ideas and theories. Psychologists call this quirk the availability bias. It’s a useful built-in shortcut when you need to make quick decisions and don’t have time to critically analyze lots of data, but it messes with your fact-checking skills.</p>
<p>In the fight for attention, sensational statements beat out unexciting, but more probable, facts. The tendency to overestimate the likelihood of vivid occurrences is called the salience bias. It leads people to mistakenly believe overhyped findings and trust confident politicians in place of cautious scientists.</p>
<p>A confirmation bias can be at work as well. People tend to give credence to news that fits their existing beliefs. This tendency helps climate change denialists and anti-vaccine advocates believe in their causes in spite of the scientific consensus against them.</p>
<p>Purveyors of fake news know the weaknesses of human minds and try to take advantage of these natural biases. <a href="https://www.huffpost.com/entry/how-to-overcome-cognitive-bias-and-use-it-to-your-advantage_b_5900fff3e4b00acb75f1844f">Training can help you</a> <a href="https://hbr.org/2015/05/outsmart-your-own-biases">recognize and overcome</a> your own cognitive biases.</p>
<h2>Tip 3: Correlation is not causation</h2>
<p>Just because you can see a relationship between two things doesn’t necessarily mean that one causes the other.</p>
<p>Even if surveys find that people who live longer drink more red wine, it doesn’t mean a daily glug will extend your life span. It could just be that red-wine drinkers are wealthier and have better health care, for instance. Look out for this error in nutrition news.</p>
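The red-wine pitfall can be made concrete with a tiny simulation. In this hedged sketch (all numbers are invented), a confounder – wealth – drives both wine consumption and lifespan, while wine itself contributes nothing; the two still come out strongly correlated:

```python
import random

random.seed(0)

wine, lifespan = [], []
for _ in range(10_000):
    wealth = random.gauss(0, 1)
    # Wealthier people drink more red wine...
    wine.append(wealth + random.gauss(0, 1))
    # ...and live longer -- but wine itself adds zero years here.
    lifespan.append(75 + 3 * wealth + random.gauss(0, 2))

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Strongly positive correlation despite zero causal effect of wine.
print(round(corr(wine, lifespan), 2))
```

A naive analysis of this data would conclude wine extends life; only controlling for wealth (or randomizing it away) reveals the truth.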
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="gloved hand holds a mouse" src="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/389322/original/file-20210312-20-1s3fgpp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">What works well in rodents might not work at all in you.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/face-of-tiny-white-mouse-peeps-out-royalty-free-image/157440932">sidsnapper/E+ via Getty Images</a></span>
</figcaption>
</figure>
<h2>Tip 4: Who were the study’s subjects?</h2>
<p>If a study used human subjects, check to see whether it was placebo-controlled. That means some participants are randomly assigned to get the treatment – like a new vaccine – and others get a fake version that they believe is real, the placebo. That way researchers can tell whether any effect they see is from the drug being tested. </p>
<p>The best trials are also double blind: To remove any bias or preconceived ideas, neither the researchers nor the volunteers know who is getting the active medication or the placebo.</p>
<p>The size of the trial is important too. When more patients are enrolled, researchers can identify safety issues and beneficial effects sooner, and any differences between subgroups are more obvious. Clinical trials can have thousands of subjects, but some scientific studies involving people are much smaller; those papers should explain how they achieved the statistical confidence they claim.</p>
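As a rough illustration of why enrollment size matters, here is a minimal sketch – with made-up figures, not drawn from any real trial – of how the 95% margin of error around an observed proportion shrinks with the square root of the sample size:

```python
import math

# Suppose 10% of treated patients report a side effect (invented figure).
p = 0.10

for n in (50, 500, 5000):
    # Normal-approximation 95% margin of error for a proportion.
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>4}: {p:.0%} +/- {margin:.1%}")
```

With 50 subjects the uncertainty swamps the estimate; with 5,000 it is under a percentage point – one reason small studies owe readers an account of their statistical confidence.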
<p>Check that any health research was actually done on people. Just because a certain drug works <a href="https://twitter.com/justsaysinmice">in rats or mice</a> does not mean it will work for you.</p>
<h2>Tip 5: Science doesn’t need ‘sides’</h2>
<p>Although a political debate requires two opposing sides, a scientific consensus does not. When the media interpret objectivity to mean equal time, it undermines science. </p>
<h2>Tip 6: Clear, honest reporting might not be the goal</h2>
<p>To get their audience’s attention, morning shows and talk shows need something exciting and new; accuracy may be less of a priority. Many science journalists are doing their best to accurately cover new research and discoveries, but plenty of science media are better classified as entertaining rather than educational. <a href="https://www.bmj.com/content/349/bmj.g7346">Dr. Oz</a>, Dr. Phil and Dr. Drew should not be your go-to medical sources. </p>
<p>Beware of medical products and procedures that sound too good to be true. Be skeptical of testimonials. Think about the key players’ motivations and who stands to make a buck.</p>
<p>If you’re still suspicious of something in the media, make sure the news being reported reflects what the research actually found by <a href="https://www.sciencemag.org/careers/2016/03/how-seriously-read-scientific-paper">reading the journal article itself</a>.</p>
<p>[<em>Deep knowledge, daily.</em> <a href="https://theconversation.com/us/newsletters/the-daily-3?utm_source=TCUS&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=deepknowledge">Sign up for The Conversation’s newsletter</a>.]</p>
<p class="fine-print"><em><span>Marc Zimmer does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Whenever you hear about a new bit of science news, these suggestions will help you assess whether it’s more fact or fiction.Marc Zimmer, Professor of Chemistry, Connecticut CollegeLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1250482019-10-10T21:06:43Z2019-10-10T21:06:43ZShould I eat red meat? Confusing studies diminish trust in nutrition science<figure><img src="https://images.theconversation.com/files/296315/original/file-20191009-3860-fojv6i.jpg?ixlib=rb-1.1.0&rect=31%2C83%2C3362%2C2124&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The new study still finds that reducing unprocessed red meat consumption by three servings in a week is associated with an an approximately eight per cent lower lifetime risk of heart disease, cancer and early death. </span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Another diet study, another controversy and the public is left wondering what to make of it. This time it’s a <a href="http://www.dssimon.com/MM/ACP-red-meat/">series of studies in the <em>Annals of Internal Medicine</em> </a> by an international group of researchers concluding people need not reduce their consumption of red and processed meat.</p>
<p>Over the past few years, study after study has indicated <a href="https://doi.org/10.1093/aje/kwt261">eating red and processed meat</a> is <a href="https://doi.org/10.1136/bmj.l2110">bad for your health</a> to the point where the <a href="https://www.who.int/features/qa/cancer-red-meat/en/">World Health Organization lists red meat as a probable carcinogen and processed meat as a carcinogen</a>. </p>
<figure>
<iframe src="https://player.vimeo.com/video/361970730" width="500" height="281" frameborder="0" webkitallowfullscreen="" mozallowfullscreen="" allowfullscreen=""></iframe>
<figcaption><span class="caption">Researchers explain the process and findings of their work examining the impact of eating meat.</span></figcaption>
</figure>
<p>This new study doesn’t dispute the finding of a possible increased risk for heart disease, cancer and early death from eating meat. However, the panel of international nutritional scientists concluded that the risk was so small, and the studies of such poor quality, that no recommendation was justified.</p>
<h2>So what does the new research actually say?</h2>
<p>The authors conducted a study of studies, sometimes called a meta-analysis. This is done when the findings of one or two pieces of research may not be definitive, or when an effect is so small that smaller studies need to be pooled into a larger one. From this, the authors found that reducing unprocessed red meat consumption by three servings a week was associated with an approximately eight per cent lower lifetime risk of heart disease, cancer and early death.</p>
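For readers curious what “pooling” smaller studies looks like in practice, here is a minimal fixed-effect, inverse-variance meta-analysis sketch. The three study estimates are invented for illustration; this is not the Annals authors’ data or their exact method:

```python
import math

# (log relative risk, standard error) from three hypothetical studies.
studies = [(math.log(0.94), 0.05),
           (math.log(0.90), 0.08),
           (math.log(0.93), 0.06)]

# Weight each study by the inverse of its variance: precise studies count more.
weights = [1 / se ** 2 for _, se in studies]
pooled_log_rr = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

rr = math.exp(pooled_log_rr)
low, high = (math.exp(pooled_log_rr + z * pooled_se) for z in (-1.96, 1.96))
print(f"pooled relative risk: {rr:.2f} (95% CI {low:.2f} to {high:.2f})")
```

The pooled estimate here lands near 0.93 – a seven-to-eight per cent lower relative risk, the same order of magnitude as the headline figure – and its standard error is smaller than any single study’s, which is the whole point of pooling.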
<p>These findings are similar to those of many studies before it and aren’t surprising. However, this is a much smaller health improvement than would be achieved by stopping smoking, eliminating hypertension or starting physical activity.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/yes-we-still-need-to-cut-down-on-red-and-processed-meat-124486">Yes, we still need to cut down on red and processed meat</a>
</strong>
</em>
</p>
<hr>
<p>Where the authors differed from previous studies was in how they assessed both the research and the benefit of reducing meat consumption to make their recommendations. They used a standard practice in medicine to <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2335261/">grade the quality of the studies</a> and found them to be poor. In addition, they interpreted the benefit of unprocessed red meat reduction (approximately eight per cent lower lifetime risk) to be small. They collectively recommended against the need for people to reduce meat consumption.</p>
<p>This sent <a href="https://www.theguardian.com/food/2019/sep/30/research-red-meat-poses-no-health-risk">nutrition and public health scientists into an uproar</a>, calling the study <a href="https://www.truehealthinitiative.org/news2019/true-health-initiative-respectfully-disagrees/">highly irresponsible</a> to public health and citing <a href="https://www.webmd.com/diet/news/20190930/controversial-studies-say-its-ok-to-eat-red-meat">grave concerns</a>.</p>
<h2>Studies identify association, not causation</h2>
<p>Nutritional science is messy. Most of our guidelines are based on observational studies in which scientists ask people what, and how much, they have eaten in a given time period (usually the previous year), and then follow them for years to see how many people get a disease or die.</p>
<p>Often, diet is assessed only once, even though we know people’s diets change over time. More robust studies ask people to report their diet multiple times, which can take those changes into account. Even so, <a href="http://dx.doi.org/10.1136/jech.54.8.611">self-reported dietary data is known to be poor</a>. People may know what they ate, but have trouble knowing how much, and even how it was prepared – all of which can affect the nutritional value of a food.</p>
<p>These studies also identify only associations, not causation. This doesn’t mean causation isn’t possible, just that the design of the study cannot show it. Usually, if a number of observational studies show similar results, our confidence in a causal effect increases. But in the end, this is still weak evidence.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/296317/original/file-20191009-3867-14dbe7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Processed meats are classified as a Group 1 carcinogen by the World Health Organization.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<h2>Sticking with diets is challenging</h2>
<p>The gold standard in medical science is the randomized controlled trial, in which people are assigned by chance to different groups – most familiarly, one receiving a new drug and another a placebo. Some say we shouldn’t apply the same standard to nutrition because it’s hard to do. Sticking to diets is extremely challenging, which makes it hard to conduct a study long enough to see an effect on disease, not to mention the costs involved in doing so.</p>
<p>In addition, nutrition is complex. It’s not like smoking, where the goal is to not smoke at all. We need to eat to live. Therefore when we stop eating one thing, we likely replace it with another. What food we choose as the replacement can be just as important to our overall health as what food was stopped.</p>
<p>There are numerous instances in which observational studies have shown a protective effect of a nutrient, only for randomized trials to disprove it. Vitamin C, D and E, folic acid and beta carotene supplements were all believed to prevent disease on the basis of observational studies; randomized studies failed to confirm those claims.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/296318/original/file-20191009-3935-107nli5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Carrots are a great source of beta carotene.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>In the case of beta carotene supplementation, for example, an <a href="https://www.nejm.org/doi/full/10.1056/NEJM199605023341802">increased risk for lung cancer</a> was found. By not holding nutrition sciences to the same bar as other medical sciences, we may be doing the public more harm than good.</p>
<h2>Weak evidence leads to bad guidelines</h2>
<p>From a public health perspective, a small individual change replicated throughout the population can lead to large changes at the societal level. This could shift the average age of disease onset or lower death rates, which in turn could reduce health-care costs. For this reason, guidelines are needed – but if all we have is weak evidence, we end up with bad guidelines.</p>
<p>Throughout the world, life expectancy has increased remarkably in recent centuries. While there are many reasons for this, advances in nutritional sciences are a key one. This knowledge has led to the elimination of nutritional deficiencies. Most people don’t worry too much about rickets, goiters or scurvy in North America these days. </p>
<p>In the future, however, additional research in nutrition is going to lead to less remarkable gains in quality and length of life, measured in days, not years.</p>
<p>While the war of words among scientists and public health officials continues, the real disservice is to the general public, who look to us for leadership. Over time this ongoing inflamed rhetoric turns into white noise, which at best gets ignored and at worst diminishes trust in nutrition science.</p>
<p>One may wonder if we should stop nutritional research altogether until we can get it right.</p>
<p><em>Scott Lear writes the weekly blog <a href="https://drscottlear.com/">Feel Healthy with Dr. Scott Lear</a>.</em></p>
<p>[ <em><a href="https://theconversation.com/ca/newsletters?utm_source=TCCA&utm_medium=inline-link&utm_campaign=newsletter-text&utm_content=expertise">Expertise in your inbox. Sign up for The Conversation’s newsletter and get a digest of academic takes on today’s news, every day.</a></em> ]</p>
<p class="fine-print"><em><span>Scott Lear has received research funding from the Canadian Institutes of Health Research, the Heart and Stroke Foundation, Novo Nordisk, Hamilton Health Sciences and the Robert Wood Johnson Foundation.</span></em></p>New research claiming that people do not need to reduce their consumption of red and processed meat says more about the conduct and evaluation of research than it does about beef.Scott Lear, Professor of Health Sciences, Simon Fraser UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1215042019-08-16T04:14:43Z2019-08-16T04:14:43ZNo, eating chocolate won’t cure depression<figure><img src="https://images.theconversation.com/files/288093/original/file-20190815-136190-1d6p0b3.jpg?ixlib=rb-1.1.0&rect=0%2C1%2C1000%2C664&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If you're depressed, the headlines might tempt you to reach out for a chocolate bar. But don't believe the hype.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/elderly-person-eating-sweets-173815130?src=J4SYOBmC2mFq6Ig9LKj6UQ-1-21">from www.shutterstock.com</a></span></figcaption></figure><p>A recent study published in the journal <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/da.22950">Depression and Anxiety</a> has attracted <a href="https://7news.com.au/lifestyle/health-wellbeing/dark-chocolate-could-boost-mood-study-c-378548">widespread media attention</a>. Media reports <a href="https://www.google.com/search?q=chocolate+depression&client=firefox-b-d&source=lnms&tbm=nws&sa=X&ved=0ahUKEwjYuqGh14PkAhXX73MBHRnOAysQ_AUIEygD&biw=1522&bih=687">said</a> eating chocolate, in particular, dark chocolate, was linked to reduced symptoms of depression.</p>
<p>Unfortunately, we cannot use this type of evidence to promote eating chocolate as a safeguard against depression, a serious, common and sometimes debilitating mental health condition.</p>
<p>This is because this study looked at an <em>association</em> between diet and depression in the general population. It did not gauge causation. In other words, it was not designed to say whether eating dark chocolate <em>caused</em> a reduction in depressive symptoms.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/what-causes-depression-what-we-know-dont-know-and-suspect-81483">What causes depression? What we know, don’t know and suspect</a>
</strong>
</em>
</p>
<hr>
<h2>What did the researchers do?</h2>
<p>The authors explored data from the United States <a href="https://www.cdc.gov/nchs/nhanes/index.htm">National Health and Nutrition Examination Survey</a>. This shows how common health, nutrition and other factors are among a representative sample of the population. </p>
<p>People in the study reported what they had eaten in the previous 24 hours in two ways. First, they recalled their diet in person to a trained dietary interviewer using a standard questionnaire. Second, several days later, they recalled what they had eaten over the phone.</p>
<p>The researchers then calculated how much chocolate participants had eaten using the average of these two recalls.</p>
<p>Dark chocolate needed to contain at least 45% cocoa solids for it to count as “dark”.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-what-is-memory-9035">Explainer: what is memory?</a>
</strong>
</em>
</p>
<hr>
<p>The researchers excluded from their analysis people who ate an implausibly large amount of chocolate, as well as people who were underweight and/or had diabetes.</p>
<p>The remaining data (from 13,626 people) was then divided in two ways. One was by categories of chocolate consumption (no chocolate, chocolate but no dark chocolate, and any dark chocolate). The other way was by the amount of chocolate (no chocolate, and then in groups, from the lowest to highest chocolate consumption).</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/mondays-medical-myth-chocolate-is-an-aphrodisiac-4980">Monday's medical myth: chocolate is an aphrodisiac </a>
</strong>
</em>
</p>
<hr>
<p>The researchers assessed people’s depressive symptoms by having participants complete a short questionnaire asking about the frequency of these symptoms over the past two weeks.</p>
<p>The researchers controlled for other factors that might influence any relationship between chocolate and depression, such as weight, gender, socioeconomic factors, smoking, sugar intake and exercise.</p>
<h2>What did the researchers find?</h2>
<p>Of the entire sample, 1,332 people (11%) said they had eaten chocolate in their two 24-hour dietary recalls, with only 148 (1.1%) reporting eating dark chocolate.</p>
<p>A total of 1,009 (7.4%) people reported depressive symptoms. But after adjusting for other factors, the researchers found no association between any chocolate consumption and depressive symptoms.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/288094/original/file-20190815-136186-kvk3wj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Few people said they’d eaten any chocolate in the past 24 hours. Were they telling the truth?</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/chocolate-bar-foil-on-gray-background-329714852">from www.shutterstock.com</a></span>
</figcaption>
</figure>
<p>However, people who ate dark chocolate had a 70% lower chance of reporting clinically relevant depressive symptoms than those who did not report eating chocolate.</p>
<p>When investigating the amount of chocolate consumed, people who ate the most chocolate were more likely to have fewer depressive symptoms.</p>
<h2>What are the study’s limitations?</h2>
<p>While the size of the dataset is impressive, there are major limitations to the investigation and its conclusions. </p>
<p>First, assessing chocolate intake is challenging. People may eat different amounts (and types) depending on the day. And asking what people ate over the past 24 hours (twice) is not the most accurate way of telling what people usually eat.</p>
<p>Then there’s whether people report what they actually eat. For instance, if you ate a whole block of chocolate yesterday, would you tell an interviewer? What about if you were also depressed?</p>
<p>This could be why so few people reported eating chocolate in this study, compared with what <a href="https://www.forbes.com/sites/niallmccarthy/2015/07/22/the-worlds-biggest-chocolate-consumers-infographic/#718514644847">retail figures</a> tell us people eat.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/these-5-foods-are-claimed-to-improve-our-health-but-the-amount-wed-need-to-consume-to-benefit-is-a-lot-116730">These 5 foods are claimed to improve our health. But the amount we'd need to consume to benefit is... a lot</a>
</strong>
</em>
</p>
<hr>
<p>Finally, the authors’ results are mathematically accurate, but misleading.</p>
<p>Only 1.1% of people in the analysis ate dark chocolate. And when they did, the amount was very small (about 12g a day). And only two people reported clinical symptoms of depression and ate any dark chocolate.</p>
<p>The authors conclude the small numbers and low consumption “attests to the strength of this finding”. I would suggest the opposite.</p>
<p>Consider also that people who ate the most chocolate (104-454g a day) had an almost 60% lower chance of having depressive symptoms. But those who ate about 100g a day had only a roughly 30% lower chance. Who’d have thought four or so more grams of chocolate could be so important? </p>
<p>This study and the media coverage that followed are perfect examples of the pitfalls of translating population-based nutrition research to public recommendations for health. </p>
<p>My general advice is, if you enjoy chocolate, go for darker varieties, with fruit or nuts added, and eat it <a href="https://theconversation.com/we-dont-yet-fully-understand-what-mindfulness-is-but-this-is-what-its-not-110698">mindfully</a>. — <strong>Ben Desbrow</strong></p>
<hr>
<h2>Blind peer review</h2>
<p>Chocolate manufacturers have been a good source of <a href="https://forbetterscience.com/2016/05/19/chocolate-is-good-for-your-funding/">funding</a> for much of the <a href="https://www.foodpolitics.com/2015/10/heres-why-food-companies-sponsor-research-mars-inc-s-cocoavia/">research</a> into chocolate products.</p>
<p>While the authors of this new study declare no conflict of interest, any whisper of good news about chocolate attracts publicity. I agree with the author’s scepticism of the study.</p>
<p>Just 1.1% of people in the study ate dark chocolate (at least 45% cocoa solids) at an average 11.7g a day. There was a wide variation in reported clinically relevant depressive symptoms in this group. So, it is not valid to draw any real conclusion from the data collected.</p>
<p>For total chocolate consumption, the authors accurately report no statistically significant association with clinically relevant depressive symptoms. </p>
<p>However, they then claim eating more chocolate is of benefit, based on fewer symptoms among those who ate the most.</p>
<p>In fact, depressive symptoms were most common in the third quartile of chocolate consumption (who ate about 100g a day), followed by the first (4-35g a day), then the second (37-95g a day), and were least common in the highest quartile (104-454g a day). Risk estimates in subsets of data such as quartiles are only valid if they lie on the same slope.</p>
<p>The basic problems come from measurements and the many confounding factors. This study can’t validly be used to justify eating more chocolate of any kind. — <strong>Rosemary Stanton</strong></p>
<hr>
<p><em><a href="https://theconversation.com/au/topics/research-check-25155">Research Checks</a> interrogate newly published studies and how they’re reported in the media. The analysis is undertaken by one or more academics not involved with the study, and reviewed by another, to make sure it’s accurate.</em></p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Depression is a serious, common and sometimes debilitating condition. And no, chocolate won’t help, whatever the headlines tell you.Ben Desbrow, Associate Professor, Nutrition and Dietetics, Griffith UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1057072018-11-01T10:49:34Z2018-11-01T10:49:34ZNumbers in the news? Make sure you don’t fall for these 3 statistical tricks<figure><img src="https://images.theconversation.com/files/243330/original/file-20181031-122177-1g4ryme.jpg?ixlib=rb-1.1.0&rect=565%2C195%2C3812%2C2802&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">If it seems too good to be true, maybe it is.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/asian-indian-business-people-holding-coffee-335519321">szefei/Shutterstock.com</a></span></figcaption></figure><p><em><a href="https://theconversation.com/como-entender-las-cifras-en-las-noticias-tres-trucos-estadisticos-106206">Leer en español</a></em>.</p>
<p>“Handy bit of research finds sexuality can be determined by the lengths of people’s fingers” was <a href="https://www.thesun.co.uk/tech/7512067/finger-length-sexuality-simon-cowell-norton/">one recent headline</a> based on a peer-reviewed study by well-respected researchers at the University of Essex <a href="https://doi.org/10.1007/s10508-018-1262-z">published in the Archives of Sexual Behavior</a>, the <a href="https://www.researchgate.net/journal/0004-0002_Archives_of_Sexual_Behavior">leading scholarly publication</a> in the area of human sexuality. </p>
<p>And, to <a href="https://scholar.google.com/citations?user=UtiewDkAAAAJ&hl=en&oi=ao">my stats-savvy eye</a>, it is a bunch of hogwash. </p>
<p>Just when it seems that news consumers may be wising up – remembering to ask whether the science is “<a href="https://doi.org/10.1177/014107680609900414">peer-reviewed</a>,” whether the sample size is big enough or who funded the work – along comes a sucker punch of a story. In this instance, the fast one comes in the <a href="https://doi.org/10.1136/bmj.292.6522.746">form of confidence intervals</a>, a statistical topic that no layperson should ever have to wade through to understand a news article.</p>
<p>But, unfortunately for any number-haters out there, if you don’t want to be fooled by breathless, overhyped or otherwise worthless research, we have to talk about a few statistical principles that could still trip you up, even when all the “legitimate research” boxes are ticked.</p>
<h2>What’s my real risk?</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=600&fit=crop&dpr=1 600w, https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=600&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=600&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=754&fit=crop&dpr=1 754w, https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=754&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/243017/original/file-20181030-76416-1d7g8gn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=754&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Yum?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/kaige/9989706193">Leo/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>One of the most depressing headlines I ever read was “<a href="https://www.news.com.au/lifestyle/health/diet/eightyear-study-finds-heavy-french-fry-eaters-have-double-the-chance-of-death/news-story/1a557be079d7947380c90924dc2f0d15">Eight-year study finds heavy French fry eaters have ‘double’ the chance of death</a>.” “Ugh,” I said out loud, sipping my glass of red wine with a big ole basket of perfectly golden fries in front of me. Really?</p>
<p>Well, yes, it’s true according to a <a href="https://doi.org/10.3945/ajcn.117.154872">peer-reviewed study published</a> in the American Journal of Clinical Nutrition. Eating french fries does double your risk of death. But how many french fries? And, more to the point, what was my original risk of death? </p>
<p>The study says that if you eat fried potatoes three times per week or more, you will double your risk of death. So let’s take an average person in this study: a 60-year-old man. What is his risk of death, regardless of how many french fries he eats? One percent. That means that if you line up 100 60-year-old men, on average one of them will die in the next year simply because he is a 60-year-old man.</p>
<p>Now, if all 100 of those men eat fried potatoes at least three times per week for their whole lives, yes, their risk of death doubles. But what is 1 percent doubled? Two percent. So instead of one of those 100 men dying over the course of the year, two of them will. And they get to eat fried potatoes three times a week or more for their entire lives – sounds like a risk I’m willing to take.</p>
<p>This is a statistical concept called <a href="https://understandinguncertainty.org">relative risk</a>. If the chance of getting some disease is 1 in a billion, even if you quadruple your risk of coming down with it, your risk is still only 4 in a billion. It ain’t gonna happen.</p>
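<p>The arithmetic above is easy to check for yourself. A minimal sketch, using the article’s illustrative baseline figures rather than real actuarial data:</p>

```python
def absolute_from_relative(baseline_risk, relative_risk):
    """Turn a headline's relative risk into the absolute risk that actually matters."""
    return baseline_risk * relative_risk

# Illustrative baseline from the article: a 60-year-old man's 1% annual risk of death.
doubled = absolute_from_relative(0.01, 2.0)
print(doubled)  # 0.02 -- "double the risk" still means only 2 in 100

# A tiny baseline stays tiny even when quadrupled:
print(absolute_from_relative(1e-9, 4.0))  # 4e-09 -- still 4 in a billion
```

<p>The relative risk alone tells you nothing about how worried to be; you always need the baseline.</p>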
<p>So next time you see an increase or decrease in risk, the first question you should ask is: “An increase or decrease from what original risk?”</p>
<p>Plus, like me, could those men have been enjoying a glass of wine or pint of beer with their fried potatoes? Could something else have actually been the culprit? </p>
<h2>Eating cheese before bed equals dying by tangled bedsheets?</h2>
<p>Baby boxes have become a <a href="https://www.ajc.com/news/national/what-baby-box-and-why-are-some-states-giving-them-new-parents/5Hh8Zk1AvhQd6p6IcNhXQI/">trendy state-sponsored gift</a> to new parents, meant to provide newborns with a safe place to sleep. The initiative grew from a Finnish effort started in the late 1930s to reduce sleep-related death in infants. The cardboard box includes a few essentials: some diapers, baby wipes, a onesie, breast pads and so on. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/243021/original/file-20181030-76402-7atk0g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Contents of a Finnish ‘maternity package’ before a newborn baby moves in.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/roxeteer/2037806537">Visa Kopu/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>Finland’s infant mortality rate decreased at a rapid rate with the introduction of these baby boxes, and the country now has one of the <a href="https://data.worldbank.org/indicator/SP.DYN.IMRT.IN?locations=FI">lowest infant mortality rates in the world</a>. So it makes sense to suppose that these baby boxes caused the infant mortality rate to go down.</p>
<p>But guess what also changed? <a href="https://www.bbc.com/news/magazine-39366596">Prenatal care</a>. In order to qualify for the baby box, a woman was required to visit health clinics starting during the first four months of her pregnancy.</p>
<p>In 1944, 31 percent of Finnish mothers received prenatal education. In 1945, it had jumped to 86 percent. The baby box was not responsible for the change in infant mortality rates; rather, it was education and early health checks.</p>
<p>This is a classic case of <a href="http://senseaboutscienceusa.org/causation-vs-correlation/">correlation not being the same as causation</a>. The introduction of baby boxes and the decrease in infant mortality rates are related, but one didn’t cause the other.</p>
<p>However, that little fact hasn’t stopped baby box companies from popping up left, right and center, selling things like the “Baby Box Bundle: Finland Original” for a mere US$449.99. And <a href="https://ssir.org/articles/entry/us_states_embrace_baby_boxes">U.S. states use tax dollars</a> to hand a version out to new mothers.</p>
<p>So the next time you see a link or association – like how eating cheese is linked to dying by <a href="http://tylervigen.com/view_correlation?id=7">becoming entangled in your bedsheets</a> – you should ask “What else could be causing that to happen?”</p>
<h2>When margin of error is bigger than the effect</h2>
<p><a href="https://www.bls.gov/news.release/empsit.htm">Recent numbers from the Bureau of Labor Statistics</a> show national unemployment dropping from 3.9 percent in August to 3.7 percent in September. When compiling these figures, the bureau obviously doesn’t go around asking every person whether they have a job or not. It asks a small sample of the population and then generalizes the unemployment rate in that group to the entire United States.</p>
<p>This means the official level of unemployment at any given time is an estimate – a good guess, but still a guess. This “plus or minus error” is defined by something statisticians call a <a href="https://www.khanacademy.org/math/statistics-probability/confidence-intervals-one-sample">confidence interval</a>. </p>
<p>What the data actually says is that it appears the number of unemployed people nationwide <a href="https://www.bls.gov/web/empsit/cpssigsuma.pdf">decreased by 270,000</a> – but with a margin of error, as defined by the confidence interval, of plus or minus 263,000. It’s easier to announce a single number like 270,000. But sampling always comes with a margin of error and it’s more accurate to think of that single estimate as a range. In this case, statisticians believe the real number of unemployed people went down by somewhere between just 7,000 on the low end and 533,000 on the high end.</p>
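<p>The interval arithmetic behind those unemployment figures is simple enough to sketch, using the numbers quoted above:</p>

```python
def interval(estimate, margin_of_error):
    """The (low, high) range implied by a point estimate and its margin of error."""
    return estimate - margin_of_error, estimate + margin_of_error

low, high = interval(270_000, 263_000)
print(low, high)  # 7000 533000 -- the real drop lies somewhere in this wide range
```

<p>Reporting the single number 270,000 hides just how wide that range is.</p>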
<p>This is the same issue that arose with the finger-length sexuality study – the plus or minus error associated with those estimates can simply negate any certainty in the results. </p>
<p>The most obvious example of confidence intervals making our lives confusing is in polling. Pollsters take a sample of the population, ask who that sample is going to vote for, and then infer from that what the entire population is going to do on Election Day. When the races are close, the plus or minus error associated with their polls of the sample negate any real knowledge of who is going to win, making the races “too close to call.”</p>
<p>So the next time you see a number being stated about an entire population where it would have been impossible to ask every single person or test every single subject, you should ask about the plus or minus error.</p>
<p>Will knowing these three statistical pitfalls mean that you never get fooled? Nope. But it sure will help.</p>
<p class="fine-print"><em><span>Liberty Vittert does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Shrewd media consumers think about these three statistical pitfalls that can be the difference between a world-changing announcement and misleading hype.Liberty Vittert, Visiting Assistant Professor in Statistics, Washington University in St. LouisLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/785392017-06-01T14:02:44Z2017-06-01T14:02:44ZSocial policies work best if they’re bespoke solutions to local problems<figure><img src="https://images.theconversation.com/files/171615/original/file-20170531-25664-1mhx9hv.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Cycle lanes work in Florence, Italy. That doesn't mean they'll work everywhere.</span> <span class="attribution"><span class="source">REUTERS/Max Rossi</span></span></figcaption></figure><p>My morning commute to work in Johannesburg takes me past city streets flanked by the strange strips of green-painted road surface that some people call “bicycle lanes”. But to call them that flies in the face of experience.</p>
<p>Usually these lanes are occupied by cars using them as makeshift parking bays; taxis veering to a halt to drop off or pick up passengers – and very occasionally by a brave pedestrian. The one thing I’m pretty confident of never seeing in these lanes is a bicycle.</p>
<p>This isn’t a rant about roads, though. It’s an example that calls attention to an inconvenient fact for policymakers who must make decisions about how to improve societies. Successful policy interventions, especially those in the social realm influenced by the vagaries of human behaviour, don’t seem to travel well. </p>
<p>To paraphrase philosopher Nancy Cartwright’s <a href="https://www.amazon.com/Evidence-Based-Policy-Practical-Guide-Better/dp/0199841624">warning</a>: “Just because it worked there, doesn’t mean it will work here”. Policies and interventions that work really well in one context often fail dismally in others. These failures can be extremely difficult to pre-empt.</p>
<p>Johannesburg’s cycling lane <a href="https://theconversation.com/johannesburgs-bike-lanes-are-not-well-used-heres-why-75068">debacle</a> nicely illustrates a research problem known as <a href="https://www.socialresearchmethods.net/kb/external.php">external validity</a>. This is basically about determining whether causal relationships transport to different environments or can be generalised across many environments. Simply put, the puzzle is why cycling lanes cause a reduction in traffic in some cities but not in others. </p>
<h2>Different contexts, different solutions</h2>
<p>The <a href="https://theconversation.com/johannesburgs-bike-lanes-are-not-well-used-heres-why-75068">failure</a> of Johannesburg’s cycle lanes has very little to do with the way they were implemented. The real problem is that policymakers should have foreseen that cycle lanes were never the right intervention in the first place, if the aim was to find an effective way of alleviating the city’s traffic problem. </p>
<p>It’s easy to be smug with the benefit of hindsight. But if we consider things from the perspective of those who had to make the decision, opting for the cycle lanes isn’t as risible as it seems now. Their thinking must have gone as follows: </p>
<blockquote>
<p>“We have a serious traffic problem. What have other big cities done to improve congestion? Answer: bicycle lanes. Solution: build bicycle lanes.”</p>
</blockquote>
<p>The mistake, I believe, was trying to import a successful mechanism – in this case in the form of infrastructure – from a different context, and expecting it to have the same effect in the local environment. Demonstrating the effectiveness of these mechanisms is one thing. It’s inferring from this effectiveness that the same mechanism will be effective in other contexts that can easily lead people astray.</p>
<p>Modern science is becoming increasingly adept at developing sophisticated methods for discovering mechanisms that underpin causal relationships. </p>
<p>This approach has worked well in the health sciences. It continues to yield important <a href="http://www.aicr.org/continuous-update-project/reports/breast-cancer-report-2017.pdf">new breakthroughs</a> in our knowledge about lifestyle factors in cancers. Attempts to identify similar mechanisms in the social sciences often result in failure. Nancy Cartwright’s discussion of <a href="https://www.amazon.com/Evidence-Based-Policy-Practical-Guide-Better/dp/0199841624">the failure</a> of the World Health Organisation’s nutrition programme in Bangladesh is a good example. Its success in India was falsely thought to be a good reason to implement it elsewhere.</p>
<p>Thinking this way would have encouraged the false opinion that as long as Johannesburg copied the correct mechanism, its traffic problem would be solved.</p>
<p>A more illuminating approach to dealing with external validity problems is to start with an analysis of the “human ecosystem” that brings about the conditions responsible for the problem. In the same way that we pay attention to the conditions that support life in natural ecosystems, this view encourages us to identify similar conditions for human populations.</p>
<h2>Human ecosystems thinking</h2>
<p>In diagnosing Johannesburg’s traffic congestion, attention should have been paid to some fundamental questions about the broader socio-economic factors influencing the city’s transportation network. This should have included a thorough analysis of where people live and work, how far they have to travel and why they choose their preferred methods of transport.</p>
<p>One factor that such thinking would have unearthed is the <a href="http://www.patrickheller.com/uploads/1/5/3/7/15377686/ijur_cartography.pdf">spatial separations</a> brought about by apartheid. Townships, where a significant proportion of the city’s workforce live, are situated on the outskirts of the metropolis. That’s far away from the city’s economically active areas where the bulk of the jobs are. </p>
<p>So, the people most adversely affected by Johannesburg’s traffic problem live too far from their workplaces to even consider cycling as a feasible solution. Those who can afford to live closer to their jobs are typically not tempted by the little money they would save. </p>
<p>This is why the city’s cycling experiment fell short of the critical mass needed to make it work.</p>
<p>A big advantage of this type of ecosystems thinking is that, instead of misguided attempts at importing foreign solutions, it encourages us to attend to local problems by paying closer attention to the local context. Policymakers are pushed to develop solutions inspired by local knowledge and sensibilities.</p>
<p>A more locally-driven approach would instead emphasise practical ways of linking township residents with more economically active areas. This might be done by expanding existing infrastructure which already does this, such as the <a href="https://www.reavaya.org.za/">Rea Vaya</a> bus routes. Or some jobs might be moved to townships.</p>
<p>Some of the solutions inspired by an ecosystems approach might seem unconventional at first – because they would be unprecedented. But this was the way people felt about innovations like <a href="http://www.economist.com/blogs/economist-explains/2013/05/economist-explains-18">Kenya’s M-Pesa</a> mobile money system. If we want bespoke solutions to unique local problems, we shouldn’t expect to find them elsewhere.</p>
<p class="fine-print"><em><span>Chad Harris does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Successful policy interventions, especially those in the social realm influenced by the vagaries of human behaviour, don’t seem to travel well.Chad Harris, African Centre for Epistemology and Philosophy of Science (ACEPS), Philosophy Department, University of JohannesburgLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/743062017-03-28T19:04:21Z2017-03-28T19:04:21ZThe seven deadly sins of statistical misinterpretation, and how to avoid them<figure><img src="https://images.theconversation.com/files/162827/original/image-20170328-21243-6xrdpk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Where are the error bars?</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p><em>Statistics is a useful tool for understanding the patterns in the world around us. But our intuition often lets us down when it comes to interpreting those patterns. In this series we look at some of the common mistakes we make and how to avoid them when thinking about <a href="https://theconversation.com/au/topics/statistics-probability-and-risk-37151">statistics, probability and risk</a>.</em></p>
<hr>
<h2>1. Assuming small differences are meaningful</h2>
<p>Many of the daily fluctuations in the stock market represent chance rather than anything meaningful. Differences in polls when one party is ahead by a point or two are often just statistical noise.</p>
<p>You can avoid drawing faulty conclusions about the causes of such fluctuations by demanding to see the “margin of error” relating to the numbers. </p>
<p>If the difference is smaller than the margin of error, there is likely no meaningful difference, and the variation is probably just down to random fluctuations.</p>
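<p>As a crude sketch of that rule of thumb (a rigorous comparison would widen the margin when differencing two estimates, so this errs on the generous side; the poll shares are hypothetical):</p>

```python
def meaningful_difference(a, b, margin):
    """Treat a difference smaller than the margin of error as statistical noise."""
    return abs(a - b) > margin

# Hypothetical two-party poll shares, each carrying a 3-point margin of error:
print(meaningful_difference(48, 46, margin=3))  # False -> too close to call
print(meaningful_difference(52, 44, margin=3))  # True  -> a real gap
```
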
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=338&fit=crop&dpr=1 600w, https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=338&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=338&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=424&fit=crop&dpr=1 754w, https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=424&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/162565/original/image-20170327-18974-kee9sg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=424&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Error bars illustrate the degree of uncertainty in a score. When such margins of error overlap, the difference is likely to be due to statistical noise.</span>
</figcaption>
</figure>
<hr>
<h2>2. Equating statistical significance with real-world significance</h2>
<p>We often hear generalisations about how two groups differ in some way, such as that women are more nurturing while men are physically stronger. </p>
<p>These differences often draw on stereotypes and folk wisdom, but they ignore the similarities between the two groups and the variation within each group.</p>
<p>If you pick two men at random, there is likely to be quite a lot of difference in their physical strength. And if you pick one man and one woman, they may end up being very similar in terms of nurturing, or the man may be more nurturing than the woman.</p>
<p>You can avoid this error by asking for the “effect size” of the differences between groups. This is a measure of how much the average of one group differs from the average of another. </p>
<p>If the effect size is small, then the two groups are very similar. Even if the effect size is large, the two groups will still likely have a great deal of variation within them, so not all members of one group will be different from all members of another group.</p>
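<p>One way to make that overlap concrete is the “probability of superiority”: the chance that a randomly chosen member of the higher-scoring group outscores a randomly chosen member of the other. For two normal distributions with equal spread this follows directly from the effect size – a standard result, sketched here with Python’s statistics module:</p>

```python
from statistics import NormalDist

def prob_superiority(effect_size):
    """Chance that a random member of the higher-mean group outscores one from the
    lower-mean group, assuming both groups are normal with equal standard deviations."""
    return NormalDist().cdf(effect_size / 2 ** 0.5)

print(f"{prob_superiority(0.2):.2f}")  # 0.56 -- a "small" effect is barely better than a coin flip
print(f"{prob_superiority(0.8):.2f}")  # 0.71 -- even a "large" effect leaves plenty of overlap
```
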
<hr>
<h2>3. Neglecting to look at extremes</h2>
<p>The flipside of effect size is relevant when the thing that you’re focusing on follows a “<a href="https://en.wikipedia.org/wiki/Normal_distribution">normal distribution</a>” (sometimes called a “bell curve”). This is where most people are near the average score and only a tiny group is well above or well below average. </p>
<p>When that happens, a small shift in the group average produces a difference that means nothing for the average person (see point 2) but that changes the character of the extremes radically. </p>
<p>Avoid this error by reflecting on whether you’re dealing with extremes or not. When you’re dealing with average people, small group differences often don’t matter. When you care a lot about the extremes, small group differences can matter heaps.</p>
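<p>The tail effect is easy to quantify. Shift one normal distribution by a small effect size and compare the two groups’ shares of the extreme tail (the numbers here are illustrative):</p>

```python
from statistics import NormalDist

a = NormalDist(mu=0.0, sigma=1.0)  # group A
b = NormalDist(mu=0.2, sigma=1.0)  # group B: a "small" 0.2-standard-deviation shift

cutoff = 3.0  # the extreme: 3 standard deviations above group A's mean
tail_a = 1 - a.cdf(cutoff)
tail_b = 1 - b.cdf(cutoff)
print(f"{tail_b / tail_a:.2f}x")  # 1.89x -- group B nearly doubles its presence in the tail
```

<p>Near the average the two groups are almost indistinguishable, yet the slightly shifted group almost doubles its share of the extreme tail.</p>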
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=312&fit=crop&dpr=1 600w, https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=312&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=312&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=392&fit=crop&dpr=1 754w, https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=392&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/162567/original/image-20170327-18998-1s1jqcj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=392&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">When two populations follow a normal distribution, the differences between them will be more apparent at the extremes than in the averages.</span>
</figcaption>
</figure>
<hr>
<h2>4. Trusting coincidence</h2>
<p>Did you know there’s a <a href="http://www.tylervigen.com/spurious-correlations">correlation</a> between the number of people who drowned each year in the United States by falling into a swimming pool and the number of films Nicolas Cage appeared in?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=237&fit=crop&dpr=1 600w, https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=237&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=237&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=297&fit=crop&dpr=1 754w, https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=297&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/162122/original/image-20170323-25751-2a2g8r.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=297&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">But is there a causal link?</span>
<span class="attribution"><a class="source" href="http://www.tylervigen.com/spurious-correlations">tylervigen.com</a></span>
</figcaption>
</figure>
<p>If you look hard enough you can find interesting patterns and correlations that are merely due to coincidence. </p>
<p>Just because two things happen to change at the same time, or in similar patterns, does not mean they are related.</p>
<p>Avoid this error by asking how reliable the observed association is. Is it a one-off, or has it happened multiple times? Can future associations be predicted? If you have seen it only once, then it is likely to be due to random chance.</p>
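<p>You can watch coincidence at work by dredging: generate many pairs of completely unrelated random series and keep the strongest correlation you stumble on. A sketch (the seed and series sizes are arbitrary):</p>

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

def series(n=10):
    return [random.random() for _ in range(n)]

# Dredge 500 pairs of unrelated series and keep the strongest "link" found.
best = max(abs(pearson(series(), series())) for _ in range(500))
print(f"{best:.2f}")  # chance alone produces a strikingly strong correlation
```
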
<hr>
<h2>5. Getting causation backwards</h2>
<p>When two things are correlated – say, unemployment and mental health issues – it might be tempting to see an “obvious” causal path – say that mental health problems lead to unemployment. </p>
<p>But sometimes the causal path goes in the other direction, such as unemployment causing mental health issues.</p>
<p>You can avoid this error by remembering to think about reverse causality when you see an association. Could the influence go in the other direction? Or could it go both ways, creating a feedback loop? </p>
<hr>
<h2>6. Forgetting to consider outside causes</h2>
<p>People often fail to evaluate possible “third factors”, or outside causes, that may create an association between two things because both are actually outcomes of the third factor.</p>
<p>For example, there might be an association between eating at restaurants and better cardiovascular health. That might lead you to believe there is a causal connection between the two.</p>
<p>However, it might turn out that those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, and can also afford better health care, and it’s the health care that affords better cardiovascular health. </p>
<p>You can avoid this error by remembering to think about third factors when you see a correlation. If you’re following up on one thing as a possible cause, ask yourself what, in turn, causes that thing? Could that third factor cause both observed outcomes? </p>
<hr>
<h2>7. Deceptive graphs</h2>
<p>A lot of mischief occurs in the scaling and labelling of the vertical axis on graphs. The labels should show the full meaningful range of whatever you’re looking at. </p>
<p>But sometimes the graph maker chooses a narrower range to make a small difference or association look more impactful. On a scale from 0 to 100, two columns might look the same height. But if you graph the same data only showing from 52.5 to 56.5, they might look drastically different.</p>
<p>You can avoid this error by taking care to note the graph’s labels along the axes. Be especially sceptical of unlabelled graphs.</p>
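<p>The exaggeration from a truncated axis can be computed directly: the apparent height ratio of two bars depends on where the vertical axis starts (the values here are hypothetical):</p>

```python
def visual_ratio(value_a, value_b, axis_min):
    """Apparent height ratio of two bars when the vertical axis starts at axis_min."""
    return (value_b - axis_min) / (value_a - axis_min)

print(visual_ratio(53, 56, axis_min=0))     # ~1.06 -- bars look nearly identical
print(visual_ratio(53, 56, axis_min=52.5))  # 7.0  -- same data, one bar towers over the other
```
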
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=183&fit=crop&dpr=1 600w, https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=183&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=183&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=229&fit=crop&dpr=1 754w, https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=229&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/161027/original/image-20170315-20495-1jjsvtm.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=229&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Graphs can tell a story – making differences look bigger or smaller depending on scale.</span>
</figcaption>
</figure><img src="https://counter.theconversation.com/content/74306/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Winnifred Louis receives funding from the Australian Research Council and the Social Sciences and Humanities Research Council of Canada. She is a teacher of statistics at the University of Queensland, as well as a social psychologist, a peace psychologist, and a longstanding activist for causes such as environmental sustainability and anti-racism. </span></em></p><p class="fine-print"><em><span>Cassandra Chapman receives funding in the form of a PhD scholarship from the Department of Education and Training of the Australian government. She previously worked in marketing and fundraising for various not-for-profits and still collaborates with organisations in that sector.</span></em></p>Here are some all-too-common errors when it comes to interpreting statistics, and how to avoid them.Winnifred Louis, Associate Professor, Social Psychology, The University of QueenslandCassandra Chapman, PhD Candidate in Social Psychology, The University of QueenslandLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/619402016-07-05T15:47:25Z2016-07-05T15:47:25ZEver noticed time seems to move faster when you’re in control of things? Science can explain why<figure><img src="https://images.theconversation.com/files/129429/original/image-20160705-795-1dtivvz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>We’ve all been there: waiting for a boring meeting to finish or for a bus to arrive and time just seems to drag on far more slowly than usual. Yet our most enjoyable moments seem to whizz by at lightning speed. It seems obvious that more boring events appear to take longer than the ones that stimulate us. But there’s another reason we sometimes experience time differently.</p>
<p>If we understand what causes an event, or cause it ourselves, the time between the cause and its effect seems shorter than it does for an event we have no control over. This phenomenon, known as temporal binding, can help us uncover some important truths about the relationship between cause and effect, and about whether or not we are really responsible for our actions.</p>
<p>Temporal binding works in a curious way. The cause of an event seems to be shifted later in time towards its effect, which in turn is shifted backwards in time towards the cause. From our perspective, the two events are drawn in towards each other, essentially bound to one another in time.</p>
<p>Patrick Haggard and his colleagues at UCL <a href="http://www.nature.com/neuro/journal/v5/n4/full/nn827.html">were the first</a> to come across this phenomenon. They asked volunteers to press a button that produced a sound after a short delay. The volunteers found the action of pressing the button and the consequence of the sound seemed to happen closer together in time than when they weren’t responsible for pushing the button.</p>
<h2>Intentional binding</h2>
<p>The same effect didn’t occur when the tone came after an involuntary muscle twitch (caused by stimulation to the brain), or after another tone following the same delay. So the researchers referred to the phenomenon as “intentional binding” as they believed that it was the person’s voluntary involvement (and so their intention to act) that bound the action and consequence together in time. Because of this, the phenomenon was <a href="http://www.sciencedirect.com/science/article/pii/S1053810011000389">quickly seen</a> as a new way of assessing how much people feel in control in certain situations without having to actually ask them.</p>
<p>Recently, researchers have even applied temporal binding to the famous Milgram electric shock experiment to see if people feel responsible for actions they have been coerced into doing. Milgram’s <a href="http://hisser.net/OL/2120/Obedience.pdf">original experiment</a> involved instructing participants to administer electric shocks to another person in order to see if people would obey an order that caused harm.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/129433/original/image-20160705-814-1myyj5b.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">In control of time.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>Haggard used a similar setup but also asked participants to estimate the time between when they pressed the button that caused the shock and the time when the shock was administered. The <a href="http://www.sciencedirect.com/science/article/pii/S096098221600052X">researchers found</a> that when the participant was coerced into giving an electric shock they experienced the time between their action and the outcome as longer than when they chose to act voluntarily.</p>
<p>Based on this, the researchers concluded that when someone is coerced into doing something they feel less in control or less responsible for their own actions than when they carry out actions voluntarily. This has fascinating implications for situations such as war crimes trials, where defendants often claim they were <a href="http://ejil.org/pdfs/10/1/571.pdf">simply obeying orders</a> and so aren’t responsible for their actions.</p>
<p>Temporal binding has also been used to study medical conditions and produced some interesting results there too. Researchers <a href="http://bit.ly/29hy1VQ">have found</a> that people with schizophrenia experience greater temporal binding than those without the condition. This suggests sufferers feel an exaggerated sense of control over the outcome of their actions, which may help explain why they delusionally believe they have control over things that they could not plausibly be responsible for.</p>
<h2>Cause not control</h2>
<p>Although temporal binding has been quickly adopted as a way of measuring feelings of control and responsibility, Marc Buehner at Cardiff University has shown that this effect is more likely to be about causal relationships. <a href="https://www.researchgate.net/profile/Marc_Buehner2/publication/232721138_Understanding_the_Past_Predicting_the_Future_Causation_Not_Intentional_Action_Is_the_Root_of_Temporal_Binding/links/55420c700cf21b21437591bc.pdf">Buehner found</a> that we experience binding when we simply observe one thing causing another, even when we aren’t directly responsible for it. For example, when a mechanical lever presses a button that then produces a sound.</p>
<p>This essentially shows that our experience of time can be influenced and shaped by our beliefs about cause and effect. Binding is still greater when there is human action involved, but this is likely due to human action and consequence simply being a <a href="http://bit.ly/29gBhBq">special type of cause and effect</a>.</p>
<p>An interesting suggestion is that binding occurs as a way for us to learn about the world. Perhaps we parcel up events that are related to one another to help us more clearly understand how the world works, how things relate to one another and how our actions impact the world around us. To test this theory, researchers at Queen’s University, Belfast and Cardiff University are in the process of looking at how children experience binding. Perhaps children experience greater binding as a way of efficiently learning about a world that they have less understanding of than adults.</p>
<p>On the other hand, children may experience binding to a lesser extent than adults because they may simply be less able to select and use information from their environment. Alternatively, binding may be steady throughout our lives and reflect an inbuilt and unchanging way of experiencing and learning about the world. Whatever the outcome, this research could provide us with invaluable information about how we learn about the world.</p><img src="https://counter.theconversation.com/content/61940/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Sara Lorimer receives funding from The Leverhulme Trust. </span></em></p>Understanding why time seems to speed up under certain conditions could reveal when we really feel responsible for our actions.Sara Lorimer, Doctoral Candidate, Psychology, Queen's University BelfastLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/307612014-09-22T20:27:30Z2014-09-22T20:27:30ZClearing up confusion between correlation and causation<figure><img src="https://images.theconversation.com/files/58128/original/rkkrgpjm-1409725506.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">An example of unidirectional cause and effect: bad weather means umbrella sales rise, but buying umbrellas won't make it rain.</span> <span class="attribution"><a class="source" href="http://www.flickr.com/photos/moionet/3747677180">Mariusz Olszewski/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p><em>UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Today we look at the dangers of making a link between unrelated results.</em></p>
<hr>
<p>Here’s an historical tidbit you may not be aware of. Between the years 1860 and 1940, as the number of Methodist ministers living in New England increased, so too did the amount of Cuban rum imported into Boston – and they both increased in an extremely similar way. Thus, Methodist ministers must have bought up lots of rum in that time period!</p>
<p>Actually no, that’s a silly conclusion to draw. What’s really going on is that both quantities – Methodist ministers and Cuban rum – were driven upwards by other factors, such as population growth.</p>
<p>In reaching that incorrect conclusion, we’ve made the far-too-common mistake of <a href="http://montemath.com/Alg1_U7_Pirates.pdf">confusing correlation with causation</a>. </p>
<h2>What’s the difference?</h2>
<p>Two quantities are said to be <em>correlated</em> if both increase and decrease together (“positively correlated”), or if one increases when the other decreases and vice-versa (“negatively correlated”).</p>
<p>Correlation is readily detected through statistical measures such as <a href="http://www.statisticshowto.com/what-is-the-pearson-correlation-coefficient/">Pearson’s correlation coefficient</a>, which indicates how tightly locked together the two quantities are, ranging from -1 (perfectly negatively correlated) through 0 (not at all correlated) and up to 1 (perfectly positively correlated).</p>
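The coefficient is simple enough to compute by hand: it is the covariance of the two quantities divided by the product of their standard deviations. A minimal sketch, not tied to any particular statistics package:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient: the covariance of the two
    quantities divided by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # 1.0  (perfectly positively correlated)
print(pearson_r([1, 2, 3], [6, 4, 2]))  # -1.0 (perfectly negatively correlated)
```

Note that the formula says nothing about *why* the two quantities move together, which is exactly the point of this article.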
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=317&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=317&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=317&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=398&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=398&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57812/original/5z99fdvs-1409538123.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=398&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="http://tylervigen.com/">tylervigen.com</a></span>
</figcaption>
</figure>
<p>But just because two quantities are correlated does not necessarily mean that one is directly <em>causing</em> the other to change. <a href="http://advan.physiology.org/content/34/4/186">Correlation does not imply causation</a>, just like cloudy weather does not imply rainfall, even though the reverse is true. </p>
<p>If two quantities are correlated then there might well be a genuine cause-and-effect relationship (such as rainfall levels and umbrella sales), but maybe other variables are driving both (such as <a href="http://www.venganza.org/2008/04/pirates-temperature/">pirate numbers and global warming</a>), or perhaps it’s just coincidence (such as <a href="http://www.tylervigen.com">US cheese consumption and strangulations-by-bedsheet</a>). </p>
<p>Even where causation is present, we must be careful not to mix up the cause with the effect, or else we might conclude, for example, that an increased use of heaters causes colder weather.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=398&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=398&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=398&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=500&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=500&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57787/original/69cs62jn-1409532160.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=500&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Wrapping up against the cold.</span>
<span class="attribution"><a class="source" href="http://www.flickr.com/photos/breatheindigital/4957005893/">Ryan Hyde/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In order to establish cause-and-effect, we need to go beyond the statistics and look for separate evidence (of a scientific or historical nature) and logical reasoning. Correlation may prompt us to go looking for such evidence in the first place, but it is by no means a proof in its own right.</p>
<h2>Subtle issues</h2>
<p>Although the above examples were obviously silly, correlation is very often mistaken for causation in ways that are not immediately obvious in the real world. When reading and interpreting statistics, one must take great care to understand exactly what the data and its statistics are implying – and more importantly, what they are <em>not</em> implying. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=315&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=315&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=315&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=396&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=396&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57815/original/tdj423nv-1409538609.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=396&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="http://tylervigen.com/">tylervigen.com/</a></span>
</figcaption>
</figure>
<p>One recent example of the need for caution in interpreting data is the excitement earlier this year surrounding the apparent groundbreaking <a href="http://www.cfa.harvard.edu/news/2014-05">detection of gravitational waves</a> – an announcement that appears to have been made <a href="http://www.nytimes.com/2014/06/20/science/space/scientists-debate-gravity-wave-detection-claim.html?_r=0">prematurely</a>, before all the variables that were affecting the data were accounted for.</p>
<p>Unfortunately, analysing statistics, probabilities and risks is not a skill set wired into our <a href="http://www.wired.com/2012/11/luck-and-skill-untangled-qa-with-michael-mauboussin/">human intuition</a>, and so is all too easy to be led astray. <a href="http://www.amazon.com/How-Lie-Statistics-Darrell-Huff/dp/0393310728">Entire books</a> have been written on the subtle ways in which statistics can be misinterpreted (or used to mislead). To help keep your guard up, here are some common slippery statistical problems that you should be aware of:</p>
<p>1) The Healthy Worker Effect, where sometimes two groups cannot be directly compared on a level playing field.</p>
<p>Consider a hypothetical study comparing the health of a group of office-workers with the health of a group of astronauts. If the study shows no significant difference between the two – no correlation between healthiness and working environment – are we to conclude that living and working in space carries no long-term health risks for astronauts? </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57811/original/67jz3n4k-1409537943.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/pahudson/2219197593">Paul Hudson/Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>No! The groups are not on the same footing: the astronaut corps screen applicants to find healthy candidates, who then maintain a comprehensive fitness regime in order to proactively combat the effects of living in “microgravity”.</p>
<p>We would therefore expect them to be significantly healthier than office workers, on average, and should rightly be concerned if they were not.</p>
<p>2) Categorisation and the Stage Migration Effect – shuffling people between groups can have dramatic effects on statistical outcomes.</p>
<p>This is also known as the <a href="http://www.cmgww.com/historic/rogers/index.html">Will Rogers</a> effect, after the US comedian who reportedly quipped:</p>
<blockquote>
<p>When the Okies left Oklahoma and moved to California, they raised the average intelligence level in both states.</p>
</blockquote>
<p>To illustrate, imagine dividing a large group of friends into a “short” group and a “tall” group (perhaps in order to arrange them for a photo). Having done so, it’s surprisingly easy to raise the average height of both groups at once.</p>
<p>Simply ask the shortest person in the “tall” group to switch over to the “short” group. The “tall” group loses its shortest member, bumping up its average height – while the “short” group gains its tallest member yet, so its average height rises too.</p>
<p>This has major implications in medical studies, where patients are often sorted into “healthy” or “unhealthy” groups in the course of testing a new treatment. If diagnostic methods improve, some very-slightly-unhealthy patients may be recategorised – leading to the health outcomes of both groups improving, regardless of how effective (or not) the treatment is.</p>
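The height example above can be run directly. A toy sketch with made-up numbers:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical heights in cm, already split into "short" and "tall" groups.
short = [150, 155, 160]
tall = [165, 180, 190]

before = (mean(short), mean(tall))  # short avg 155.0, tall avg ~178.3

# Move the shortest member of the tall group into the short group.
mover = min(tall)
tall.remove(mover)
short.append(mover)

after = (mean(short), mean(tall))  # short avg 157.5, tall avg 185.0

# Both averages went up, even though no one grew taller.
assert after[0] > before[0] and after[1] > before[1]
```

No individual changed; only the group boundaries did, yet both group averages improved.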
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=413&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=413&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=413&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=519&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=519&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57575/original/584qmy8p-1409177390.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=519&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Picking and choosing among the data can lead to the wrong conclusions. Sceptics see a period of cooling (blue) when the data really show long-term warming (green).</span>
<span class="attribution"><span class="source">skepticalscience.com</span></span>
</figcaption>
</figure>
<p>3) Data mining – when an abundance of data is present, bits and pieces can be cherry-picked to support any desired conclusion.</p>
<p>This is bad statistical practice, but <a href="http://www.intelltheory.com/burt.shtml">if done deliberately</a> can be hard to spot without knowledge of the original, complete data set.</p>
<p>Consider the above graph showing two interpretations of global warming data, for instance. Or fluoride – in small amounts it is one of the most effective preventative medicines in history, but the positive effect disappears entirely if one only ever considers toxic quantities of fluoride. </p>
<p>For similar reasons, it is important that the procedures for a given statistical experiment are fixed in place before the experiment begins and then remain unchanged until the experiment ends.</p>
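A toy illustration of the cherry-picking problem, using an invented series: a fitted trend over the whole data rises, but a hand-picked window within it falls.

```python
def slope(ys):
    """Least-squares trend (slope) of ys against their indices 0..n-1."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Invented series with a clear long-term rise but short-term dips.
series = [1, 3, 2, 5, 4, 7, 6, 9, 8, 11]

full_trend = slope(series)          # positive: the real long-term trend
cherry_picked = slope(series[3:5])  # [5, 4] -- a hand-picked "cooling" window

assert full_trend > 0 and cherry_picked < 0
```

This is exactly the trick in the global warming graph above: choose a short enough window and you can manufacture a trend of either sign.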
<p>4) Clustering – which is to be expected even in completely random data.</p>
<p>Consider a medical study examining how a particular disease, such as cancer or multiple sclerosis, is <a href="http://www.nytimes.com/2010/05/11/nyregion/11map.html">geographically distributed</a>. If the disease strikes at random (and the environment has no effect), we would expect to see numerous clusters of patients as a matter of course. If patients were spread out perfectly evenly, the distribution would be most un-random indeed! </p>
<p>So the presence of a single cluster, or a number of small clusters of cases, is entirely normal. Sophisticated statistical methods are needed to determine just how much clustering is required to deduce that something in that area might be causing the illness.</p>
<p>Unfortunately, any cluster at all – even a non-significant one – makes for an easy (and at first glance, compelling) news headline.</p>
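You can see clusters arise from pure chance with a short simulation. A sketch (the ten "regions" and fixed seed are arbitrary choices for reproducibility):

```python
import random

random.seed(1)  # fixed seed so the demonstration is reproducible

# Scatter 100 purely random "cases" across 10 equal regions.
regions = [0] * 10
for _ in range(100):
    regions[random.randrange(10)] += 1

# Even with no environmental cause, the counts come out uneven:
# some regions look like "clusters" while others look spared.
print(max(regions), min(regions))
```

The expected count is 10 per region, but on any single run some regions will sit well above that and others well below, with no cause at all behind the pattern.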
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=242&fit=crop&dpr=1 600w, https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=242&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=242&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=304&fit=crop&dpr=1 754w, https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=304&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/57571/original/xkwn2zzb-1409175060.png?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=304&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">One must always be wary when drawing conclusions from data!</span>
<span class="attribution"><a class="source" href="http://xkcd.com/552/">Randall Munroe</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Statistical analysis, like any other powerful tool, must be used very carefully – and in particular, one must always be careful when drawing conclusions based on the fact that two quantities are correlated. </p>
<p>Instead, we must always insist on separate evidence to argue for cause-and-effect – and that evidence will not come in the form of a single statistical number. </p>
<p>Seemingly compelling correlations, say between given genes and <a href="http://www.schizophrenia.com/research/hereditygen.htm">schizophrenia</a> or between a <a href="http://www.independent.co.uk/life-style/health-and-families/features/the-science-of-saturated-fat-a-big-fat-surprise-about-nutrition-9692121.html">high fat diet</a> and heart disease, may turn out to be based on very dubious methodology.</p>
<p>As a species, we are perhaps cognitively ill-prepared to deal with these issues. As Canadian educator <a href="https://www.sfu.ca/%7Eegan/">Kieran Egan</a> put it in his book <a href="https://www.sfu.ca/%7Eegan/wrongindex.html">Getting it Wrong from the Beginning</a>:</p>
<blockquote>
<p>The bad news is that our evolution equipped us to live in small, stable, hunter-gatherer societies. We are Pleistocene people, but our languaged brains have created massive, multicultural, technologically sophisticated and rapidly changing societies for us to live in.</p>
</blockquote>
<p>In consequence, we must constantly resist the temptation to see meaning in chance and to confuse correlation and causation.</p>
<hr>
<p><strong>This article is part of a series on <a href="https://theconversation.com/au/topics/understanding-research">Understanding Research</a>.</strong></p>
<p><strong>Further reading:</strong> <br>
<strong><a href="https://theconversation.com/why-research-beats-anecdote-in-our-search-for-knowledge-30654">Why research beats anecdote in our search for knowledge</a></strong> <br>
<strong><a href="https://theconversation.com/wheres-the-proof-in-science-there-is-none-30570">Where’s the proof in science? There is none</a></strong> <br>
<strong><a href="https://theconversation.com/positives-in-negative-results-when-finding-nothing-means-something-26400">Positives in negative results: when finding ‘nothing’ means something</a></strong><br>
<strong><a href="https://theconversation.com/the-risks-of-blowing-your-own-trumpet-too-soon-on-research-31362">The risks of blowing your own trumpet too soon on research</a></strong> <br>
<strong><a href="https://theconversation.com/how-to-find-the-knowns-and-unknowns-in-any-research-26338">How to find the knowns and unknowns in any research</a></strong> <br>
<strong><a href="https://theconversation.com/how-myths-and-tabloids-feed-on-anomalies-in-science-29337">How myths and tabloids feed on anomalies in science</a></strong> <br>
<strong><a href="https://theconversation.com/the-10-stuff-ups-we-all-make-when-interpreting-research-30816">The 10 stuff-ups we all make when interpreting research</a></strong> <br></p><img src="https://counter.theconversation.com/content/30761/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>Jonathan Borwein (Jon) receives funding from the ARC.</span></em></p><p class="fine-print"><em><span>Michael Rose does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>UNDERSTANDING RESEARCH: What do we actually mean by research and how does it help inform our understanding of things? Today we look at the dangers of making a link between unrelated results. Here’s an…Jonathan Borwein (Jon), Laureate Professor of Mathematics, University of NewcastleMichael Rose, PhD Candidate, School of Mathematical and Physical Sciences, University of NewcastleLicensed as Creative Commons – attribution, no derivatives.