<p class="fine-print"><em>Research validity – The Conversation, 2018-10-25</em></p>
<h1>Overhype and ‘research laundering’ are a self-inflicted wound for social science</h1>
<figure><img src="https://images.theconversation.com/files/242169/original/file-20181024-71026-1rxjnlj.jpg?ixlib=rb-1.1.0&rect=11%2C259%2C3733%2C2945&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Overselling slim results can get research findings into the hands of news consumers.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/vintage-newspaper-boy-shouting-latest-news-258824045">durantelallera/Shutterstock.com</a></span></figcaption></figure><p>Earlier this fall, Dartmouth College researchers released a study <a href="https://doi.org/10.1073/pnas.1611617114">claiming to link</a> violent video games to aggression in kids. The logic of a meta-analytic study like this one is that by combining many individual studies, scientists can look for common trends or effects identified in earlier work. But as a psychology researcher who has long focused on this area, I contend this meta-analysis did nothing of the sort. In fact, the magnitude of the effect it found is about the same as that of <a href="https://www.wired.com/story/its-time-for-a-serious-talk-about-the-science-of-tech-addiction/">eating potatoes on teen suicide</a>. If anything, it suggests video games do not predict youth aggression.</p>
<p>This study, and others like it, are symptomatic of a big problem within social science: the overhyping of dodgy, unreliable research findings that have little real-world application. Often such findings shape public perceptions of the human condition and <a href="https://www.law.cornell.edu/supct/pdf/08-1448P.ZO">guide public policy</a> – despite largely being rubbish. Here’s how it happens.</p>
<p>The last few years have seen psychology, in particular, embroiled in what some call a <a href="https://www.theatlantic.com/science/archive/2016/03/psychologys-replication-crisis-cant-be-wished-away/472272/">reproducibility crisis</a>. Many long-cherished findings in social science more broadly have <a href="https://www.washingtonpost.com/news/speaking-of-science/wp/2018/08/27/researchers-replicate-just-13-of-21-social-science-experiments-published-in-top-journals/">proven difficult</a> to replicate under rigorous conditions. When a study is run again, it doesn’t turn up the same results as originally published. The <a href="https://doi.org/10.1126/science.1255484">pressure to publish positive findings</a> and the tendency for researchers to <a href="https://doi.org/10.1126/science.1255484">inject their own biases</a> into analyses intensify the issue. Much of this failure to replicate can be addressed with more transparent and rigorous methods in social science.</p>
<p>But the overhyping of weak results is different. It can’t be fixed methodologically; a solution would need to come from a cultural change within the field. But incentives to be upfront about shortcomings are few, particularly for a field such as psychology, <a href="https://doi.org/10.1037/a0023963">which worries</a> over <a href="http://dx.doi.org/10.1037/a0039405">public perception</a>. </p>
<p>One example is the Implicit Association Test (IAT). This technique is most famous for probing for unconscious racial biases. Given the attention it and the theories based upon it have received, something of a cottage industry has developed to <a href="https://thinkprogress.org/starbucks-ceo-plans-racial-bias-training-89ba69933de2/">train employees about their implicit biases</a> and how to overcome them. Unfortunately, a number of studies suggest the IAT is <a href="https://www.thecut.com/2017/01/psychologys-racism-measuring-tool-isnt-up-to-the-job.html">unreliable and doesn’t predict real-world behavior</a>. Combating racial bias is laudable, but the considerable public investment in the IAT and the concept of implicit biases is likely less productive than advertised.</p>
<p>Part of the problem is something I call “death by press release.” This phenomenon occurs when researchers or their university, or a journal-publishing organization such as the American Psychological Association, releases a press release that hypes a study’s findings without detailing its limitations. Sensationalistic claims tend to <a href="https://doi.org/10.1089/cyber.2017.0364">get more news attention</a>.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=705&fit=crop&dpr=1 600w, https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=705&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=705&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=886&fit=crop&dpr=1 754w, https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=886&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/242170/original/file-20181024-71017-1u3rl6d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=886&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">An easy tweak to get kids enthusiastically eating their veggies was too good to be true.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/happy-boy-carrot-healthy-food-concept-217315831">ilikestudio/Shutterstock.com</a></span>
</figcaption>
</figure>
<p>For instance, one now notorious food lab at Cornell experienced <a href="https://www.washingtonpost.com/health/2018/09/20/this-ivy-league-food-scientist-was-media-darling-now-his-studies-are-being-retracted/">multiple retractions</a> after it came out that researchers there had tortured their data to get headline-friendly conclusions. Their research suggested that people ate more when served larger portions, that action television shows increased food consumption, and that kids’ vegetable consumption would go up if produce was rebranded with kid-friendly themes such as “X-ray vision carrots.” Above all, lab leader Brian Wansink appears to have become an expert in <a href="https://slate.com/technology/2018/02/how-brian-wansink-forgot-the-difference-between-science-and-marketing.html">marketing social science</a>, even though most of the conclusions were flimsy. </p>
<p>Another concern is a process I call “science laundering” – the cleaning up of dirty, messy, inconclusive science for public consumption. In my own area of expertise, the Dartmouth meta-analysis on video games is a good example. <a href="https://doi.org/10.1177/1745691615592234">Similar evidence</a> to what had been fed into the meta-analysis had been available for years and actually formed the basis for <a href="https://doi.org/10.1111/jcom.12293">why most scholars</a> no longer link violent games to youth assaults.</p>
<p><a href="https://www.sciencemag.org/news/2018/09/meta-analyses-were-supposed-end-scientific-debates-often-they-only-cause-more">Science magazine</a> recently discussed how meta-analyses can be misused to try to prematurely end scientific debates. Meta-analyses can be helpful when they illuminate scientific practices that may cause spurious effects, in order to guide future research. But they can artificially smooth over important disagreements between studies.</p>
<p>Let’s say we hypothesize that eating blueberries cures depression. We run 100 studies to test this hypothesis. Imagine about 25 percent of our experiments find small links between blueberries and reduced depression, whereas the other 75 percent show nothing. Most people would agree this is a pretty poor showing for the blueberry hypothesis. The bulk of our evidence didn’t find any improvement in depression after eating the berries. But, due to a quirk of meta-analysis, combining all 100 of our studies together would show what scientists call a “statistically significant” effect – meaning something that was unlikely to happen just by chance – even though most of the individual studies on their own were not statistically significant.</p>
<p>Merging together even a few studies that show an effect with a larger group of studies that don’t can end up with a meta-analysis result that looks statistically significant – even if the individual studies varied quite a bit. These types of results constitute what some psychologists have called the “<a href="http://goodsciencebadscience.nl/?p=471">crud factor</a>” of psychological research – statistically significant findings that are noise, not real effects that reflect anything in the real world. Or, put bluntly, meta-analyses are a great tool for scholars to fool themselves with.</p>
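The blueberry thought experiment above can be made concrete with a toy simulation. Nothing here comes from the article itself: all the numbers (effect size, sample size, study counts) are hypothetical, and Stouffer's z-score method is used as one simple illustrative way of pooling results, not the method any particular meta-analysis used.

```python
import math
import random

random.seed(42)  # reproducible toy example

def two_sample_z(effect, n=100):
    """Simulate one study: two groups of n people, true standardized
    effect `effect`, and return the z statistic for the group difference."""
    treated = [random.gauss(effect, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    se = math.sqrt(2 / n)  # standard error, known unit variance per group
    return diff / se

def p_from_z(z):
    """One-sided p-value for a z statistic, via the normal CDF."""
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical literature: 25 studies with a small true effect, 75 with none.
zs = [two_sample_z(0.3) for _ in range(25)] + \
     [two_sample_z(0.0) for _ in range(75)]

individually_significant = sum(p_from_z(z) < 0.05 for z in zs)

# Stouffer's method: pool all 100 z statistics into one combined z.
combined_z = sum(zs) / math.sqrt(len(zs))

print(f"{individually_significant} of 100 studies significant on their own")
print(f"combined z = {combined_z:.2f}, p = {p_from_z(combined_z):.4f}")
```

Running this, most of the 100 simulated studies are not significant on their own, yet the pooled z comes out far past the conventional significance cutoff – exactly the quirk described above: a meta-analysis can report a "statistically significant" overall effect even when the bulk of the underlying studies found nothing.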
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/242098/original/file-20181024-71035-tvpd3d.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Nobody wants quality research to languish unseen in archives….</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/moonlightbulb/6307961852">Selena N. B. H./Flickr</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Professional guild organizations for fields such as psychology and pediatrics should shoulder much of the blame for the spread of research overhyping. Such organizations release numerous, <a href="https://doi.org/10.1016/0732-118X(95)00025-C">often deeply flawed</a>, policy statements trumpeting research findings in a field. The public often does not realize that such organizations function to market and <a href="https://psychcentral.com/blog/why-the-apa-is-losing-members/">promote a profession</a>; they’re not neutral, objective observers of scientific research – which is often published, <a href="http://ar2016.apa.org/financials/">for income</a>, in their own journals. </p>
<p>Unfortunately, such science laundering can come back to haunt a field when overhyped claims turn out to be misleading. Dishonest overpromotion of social science can cause the public and <a href="https://doi.org/10.4065/mcp.2010.0762">the courts</a> to grow more skeptical of it. Why should taxpayers fund research that is oversold rubbish? Why should media consumers trust what research says today if they were burned by what it said yesterday?</p>
<p>Individual scholars and the professional guilds that represent them can do much to fix these issues by reconsidering lax standards of evidence, the overselling of weak effects, and the current lack of upfront honesty about methodological limitations. In the meantime, the public would do well to keep applying a healthy dose of critical thinking to lofty claims coming from press releases in the social sciences. Ask whether the magnitude of the effect is meaningfully greater than that of potatoes on teen suicide. If the answer is no, it’s time to move on.</p>
<p class="fine-print"><em><span>Christopher J. Ferguson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.</span></em></p>
<p class="fine-print"><em>Christopher J. Ferguson, Professor of Psychology, Stetson University. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<p class="fine-print"><em>The Conversation, 2017-06-29</em></p>
<h1>Take that chocolate milk survey with a grain of salt</h1>
<figure><img src="https://images.theconversation.com/files/175921/original/file-20170627-24760-mrp8tm.jpg?ixlib=rb-1.1.0&rect=1310%2C0%2C3784%2C2383&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">And don't expect chocolate ice cream, either.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/barneymoss/15207454576">Barney Moss</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>It’s been all over the news lately: a survey by <a href="http://www.usdairy.com/">the Innovation Center for U.S. Dairy</a> suggests that <a href="https://www.washingtonpost.com/news/wonk/wp/2017/06/15/seven-percent-of-americans-think-chocolate-milk-comes-from-brown-cows-and-thats-not-even-the-scary-part/">7 percent of American adults</a> believe <a href="http://www.foodandwine.com/news/survey-finds-too-many-people-still-think-chocolate-milk-comes-brown-cows">chocolate milk comes from brown cows</a>.</p>
<p>The takeaway of much of this reporting is that Americans are science illiterate as well as <a href="http://www.businessinsider.com/how-is-chocolate-milk-made-survey-brown-cows-2017-6">uninformed about how their food is produced</a>. This interpretation is intuitive: research has suggested that <a href="http://www.pewinternet.org/2015/09/10/what-the-public-knows-and-does-not-know-about-science/">Americans lack understanding of many scientific concepts</a> and the <a href="https://www.washingtonpost.com/news/energy-environment/wp/2015/01/29/americans-are-still-scientifically-illiterate-and-scientists-still-need-a-pr-team/">story line of Americans as woefully ignorant of science</a> is perennial. As a society, we are also urbanizing and <a href="https://www.ers.usda.gov/topics/farm-economy/farm-labor/">fewer people work in agriculture</a>, so it’s unsurprising that many don’t know how food is made. These survey results line up with this prevailing wisdom.</p>
<p>But is this what the survey is actually telling us? To us as researchers studying science communication and public understanding of science, factors in the survey itself and in the way the media report on it raise questions about how much to read into these findings.</p>
<h2>Survey’s results aren’t publicly available</h2>
<p>Researchers are trained to look for the original methods whenever they read a new study, especially if the results are surprising. Learning how the study was done provides information that helps determine whether the science is sound and what to make of it.</p>
<p>The chocolate milk survey is described as a nationally representative survey of 1,000 American adults, but this is impossible to verify without seeing how respondents were selected. Likewise, how the survey was conducted – whether it was a phone or online survey, for instance – can have significant impacts on its accuracy. Research suggests that <a href="http://www.pewresearch.org/2015/05/13/from-telephone-to-the-web-the-challenge-of-mode-of-interview-effects-in-public-opinion-polls/">phone surveys may be less accurate than online surveys</a> because they require people to give their responses out loud to another person instead of quietly clicking away in privacy.</p>
<p>For instance, someone who holds racist views may feel comfortable checking a box about it but might avoid openly professing those opinions on the phone to a stranger. It’s unlikely the chocolate milk survey ran into such problems, but depending on the questions asked, other challenges may have presented themselves.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=502&fit=crop&dpr=1 754w, https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=502&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/175922/original/file-20170627-7455-1fqesmw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=502&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Just to clarify, the recipe includes chocolate and milk.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/shutterbean/6757209625">tracy benjamin</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span>
</figcaption>
</figure>
<p>Likewise, it’s difficult to interpret the results of the chocolate milk question without seeing how it was worded. Poorly phrased or confusing questions abound in survey research and complicate the process of interpreting findings.</p>
<p>An NPR interview with Jean Ragalie-Carr, president of the National Dairy Council, is the closest we can get to the actual wording of potential responses: “<a href="http://www.npr.org/2017/06/16/533255590/alarming-number-of-americans-believe-chocolate-milk-comes-from-brown-cows">there was brown cows, or black-and-white cows, or they didn’t know</a>.” But as Glendora Meikle of the Columbia Journalism Review points out, we don’t know <a href="https://www.cjr.org/analysis/brown-milk-study-cows.php">if those were the only options presented</a> to respondents.</p>
<p>This matters. For instance, if respondents associate <a href="http://www.dairyspot.com/dairy-farming/dairy-farming-facts/types-of-cows/">some color cows with dairy production</a> and other color cows with beef production, it’s easy to see how <a href="http://www.cattlenetwork.com/cattle-news/Differences-between-beef-and-dairy-are-not-always-black-and-white-212016371.html">people could become confused</a>. If this is the case, they’re not confused about where chocolate milk comes from, but about the difference between dairy cows and beef cows.</p>
<p>Social scientists call this a <a href="http://psc.dss.ucdavis.edu/sommerb/sommerdemo/intro/validity.htm">problem with validity</a>: the question doesn’t really measure what it’s supposed to measure. Of course, without seeing how the question was worded, we can’t know whether the chocolate milk question had validity.</p>
<p>Indeed, early media coverage focused on the 7 percent statistic but left out the fact that 48 percent of respondents said they don’t know where chocolate milk comes from. This gives context to the 7 percent number. While it’s conceivable that 7 percent of the population doesn’t know that chocolate milk is just milk with chocolate, the idea that a full 55 percent — over half of adults — don’t know or gave an incorrect response begins to strain credulity. This points toward a confusing survey question.</p>
<p>We reached out to Lisa McComb, the senior vice president of communications for Dairy Management, Inc., about the survey. She confirmed that it’s not publicly available. “The purpose of the survey was to gauge some interesting and fun facts about consumers’ perceptions of dairy, not a scientific or academic study intended to be published,” she told us.</p>
<h2>Story feeds a popular narrative — and media missed it</h2>
<p>Questions about the original findings aside, there’s reason to explore how the media covered the chocolate milk survey.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=903&fit=crop&dpr=1 600w, https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=903&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=903&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1135&fit=crop&dpr=1 754w, https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1135&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/175925/original/file-20170627-24798-gp73ez.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1135&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">At least they knew cows produce milk?</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/usdagov/9733479421">USDA Photo by Bob Nichols</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>The results were instantly shared and republished by a mind-boggling number of outlets (<a href="https://trends.google.com/trends/explore?date=today%201-m&q=%22chocolate%20milk%22%20%22brown%20cows%22">a Google Trends search</a> for “chocolate milk” and “brown cows” shows a spike beginning June 15th). This factoid likely garnered such massive attention because it feeds into a popular narrative about American ignorance and science illiteracy.</p>
<p>Our research suggests that people who are often accused of being <a href="https://blogs.scientificamerican.com/guest-blog/who-are-you-calling-anti-science/">“anti-science” are not necessarily as unscientific</a> as one might think. The rapid spread of this story is likely related to the desire, <a href="http://www.huffingtonpost.com/bob-burnett/the-birth-of-the-stupid-p_b_10127988.html">unfortunately prominent among many liberals</a>, to see and label other people as ignorant.</p>
<p>Studies suggest we are <a href="https://www.ncbi.nlm.nih.gov/pubmed/28557511">more likely to accept new information when it confirms</a> what we already want to believe. In this case, the chocolate milk statistic fits well with the notion that Americans are fools, so it’s accepted and republished widely despite the numerous red flags that should give scientifically minded people pause.</p>
<p>But the fact remains that many reporters and news outlets decided to run the story without having seen the original results, instead citing one another’s reporting. This led to some interesting challenges when trying to fact-check the survey: <a href="https://www.washingtonpost.com/news/wonk/wp/2017/06/15/seven-percent-of-americans-think-chocolate-milk-comes-from-brown-cows-and-thats-not-even-the-scary-part/">The Washington Post</a> links to <a href="http://www.foodandwine.com/news/survey-finds-too-many-people-still-think-chocolate-milk-comes-brown-cows">Food & Wine’s</a> coverage, which linked to the <a href="https://dairygood.org/undeniably-dairy">Innovation Center’s website</a>, which originally publicized the survey results. The Innovation Center, in turn, links to a story on <a href="http://www.today.com/food/does-chocolate-milk-comes-brown-cows-t112772">Today.com</a>, which linked right back to the Food & Wine article. This type of circular reporting without seeking out the original source can lead to the spread of misinformation. Unfortunately, as news stories quickly pop up and go viral online, it’s all too likely that we will continue to see such problems in the future. </p>
<p>Importantly, none of this disproves the notion that some adults believe chocolate milk comes from brown cows. It certainly does nothing to undermine the need for increased science education in the United States, nor does it suggest that a better understanding of our food production system wouldn’t benefit society. All of these points are still valid. Likewise, this isn’t necessarily evidence that the survey itself is flawed. As McComb notes, the survey is not a scientific one and isn’t meant to be taken as evidence of Americans’ knowledge (or lack thereof) of dairy products. The problem is that it’s being reported on as though it is.</p>
<p>So this survey did point out a lack of science understanding – just not the one in the headlines. Rather than demonstrating Americans’ ignorance of chocolate milk’s origins, the fact that this survey was covered so widely and with so few caveats shows that many people are not skeptical enough of the science they read.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointments.</span></em></p>
<p class="fine-print"><em>Lauren Griffin, Director of External Research for frank, College of Journalism and Communications, University of Florida; Troy Campbell, Assistant Professor of Marketing, University of Oregon. Licensed as Creative Commons – attribution, no derivatives.</em></p>
<p class="fine-print"><em>The Conversation, 2017-05-30</em></p>
<h1>Research transparency: 5 questions about open science answered</h1>
<figure><img src="https://images.theconversation.com/files/171204/original/file-20170526-6389-1eepgnq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Opening up data and materials helps with research transparency.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/book-wisdom-life-read-magic-background-515241850">REDPIXEL.PL via Shutterstock.com</a></span></figcaption></figure><p><strong>What is “open science”?</strong></p>
<p><a href="https://osf.io/preprints/psyarxiv/ak6jr">Open science</a> is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research materials, data and lab procedures freely available online to anyone. Many scientists are also proponents of <a href="https://sparcopen.org/open-access/">open access</a>, a parallel movement involving making research articles available to read without a subscription or access fee.</p>
<p><strong>Why are researchers interested in open science? What problems does it aim to address?</strong></p>
<p>Recent research finds that many published scientific findings might not be reliable. For example, researchers have reported being able to replicate <a href="https://elife.elifesciences.org/collections/reproducibility-project-cancer-biology">only 40 percent</a> <a href="https://doi.org/10.1038/nrd3439-c1">or less</a> of <a href="http://www.nature.com/nature/journal/v483/n7391/full/483531a.html">cancer biology results</a>, and a large-scale <a href="https://doi.org/10.1126/science.aac4716">attempt to replicate 100 recent psychology studies</a> successfully reproduced fewer than half of the original results.</p>
<p>This has come to be called a “<a href="https://theconversation.com/we-found-only-one-third-of-published-psychology-research-is-reliable-now-what-46596">reproducibility crisis</a>.” It’s pushed many scientists to look for ways to improve their research practices and increase study reliability. Practicing open science is one way to do so. When scientists share their underlying materials and data, other scientists can more easily evaluate and attempt to replicate them.</p>
<p>Also, open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyze them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (<a href="https://www.nature.com/sdata/">Scientific Data</a>; <a href="http://openpsychologydata.metajnl.com/">Journal of Open Psychology Data</a>). <a href="http://doi.org/10.5334/jopd.ac">A paper in the latter</a> has already been cited 17 times in under three years – nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research.</p>
<p><strong>Wait – open science sounds just like the way I learned in school that science works. How can this be new?</strong></p>
<p>Under the status quo, science is shared through a single vehicle: Researchers publish journal articles summarizing their studies’ methods and results. The key word here is summary: to keep an article clear and succinct, researchers may omit important details. Journal articles are vetted via the peer review process, in which an editor and a few experts assess them for quality before publication. But – perhaps surprisingly – the primary data and materials underlying the article are almost never reviewed. </p>
<p>Historically, this made some sense because journal pages were limited, and storing and sharing materials and data were difficult. But with computers and the internet, it’s much easier to practice open science. It’s now feasible to store large quantities of information on personal computers, and <a href="https://www.nature.com/sdata/policies/repositories">online repositories to share study materials and data</a> are becoming more common. Recently, some journals have even begun to <a href="http://journals.plos.org/plosone/s/data-availability">require</a> or <a href="https://osf.io/tvyxz/wiki/5.%20Adoptions%20and%20Endorsements/">reward</a> <a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456">open science practices</a> like publicly posting materials and data.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open science makes sharing data the default.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/client-passing-documentation-binders-his-partner-330663044">Bacho via Shutterstock.com</a></span>
</figcaption>
</figure>
<p>There are still some difficulties sharing extremely large data sets and physical materials (such as the specific liquid solutions a chemist might use), and some scientists might have good reasons to keep some information private (for instance, trade secrets or study participants’ personal information). But as time passes, more and more scientists will likely practice open science. And, in turn, science will improve.</p>
<p>Some view the open science movement as a return to science’s core values. Most researchers have long <a href="https://doi.org/10.1525/jer.2007.2.4.3">valued transparency</a> as a key ingredient in evaluating the truth of a claim. Now, with technology’s help, it is much easier to share everything.</p>
<p><strong>Why isn’t open science the default? What incentives work against open science practices?</strong></p>
<p>Two major forces work against adoption of open science practices: habits and reward structures. First, most established researchers have been practicing closed science for years, even decades, and changing these old habits requires some upfront time and effort. <a href="https://osf.io">Technology</a> is helping speed this process of adopting open habits, but behavioral change is hard. </p>
<p>Second, scientists, like other humans, tend to repeat behaviors that are rewarded and avoid those that are punished. Journal editors have tended to favor publishing papers that tell a tidy story with perfectly clear results. This has led researchers to craft their papers to be free from blemish, omitting “failed” studies that don’t clearly support their theories. But real data are often messy, so being fully transparent can open up researchers to critique. </p>
<p>Additionally, some researchers are afraid of being “scooped” – they worry someone will steal their idea and publish first. Or they fear that others will <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">unfairly benefit</a> from using shared data or materials without putting in as much effort. </p>
<p>Taken together, some researchers worry they will be punished for their openness, and they are skeptical that the extra workload of adopting open science habits is worthwhile. We believe scientists must continue to <a href="https://osf.io/tvyxz/">develop systems</a> to <a href="http://www.ourdigitalmags.com/publication/?i=365522&article_id=2657445&view=articleBrowser&ver=html5#%7B%22issue_id%22:365522,%22view%22:%22articleBrowser%22,%22article_id%22:%222657445%22%7D">allay fears</a> and reward openness. </p>
<p><strong>I’m not a scientist; why should I care?</strong></p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=585&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=585&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=585&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open access is the cousin to open science – the idea is that research should be freely available to all, not hidden behind paywalls.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/34070876@N08/3602393341">h_pampel</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Science benefits everyone. If you’re reading this article now on a computer, or have ever benefited from an antibiotic, or kicked a bad habit following a psychologist’s advice, then you are a consumer of science. Open science (and its cousin, open access) means that anyone – including teachers, policymakers, journalists and other nonscientists – can access and evaluate study information.</p>
<p>Considering automatic enrollment in a 401k at work or whether to have that elective screening procedure at the doctor? Want to ensure your tax dollars are spent on policies and programs that actually work? Access to high-quality research evidence matters to you. Open materials and open data facilitate reuse of scientific products, increasing the value of every tax dollar invested. Improving science’s reliability and speed benefits us all.</p>
<p class="fine-print"><em><span>Elizabeth Gilbert supports the Society for the Improvement of Psychological Science and has published on replication efforts as part of the Open Science Collaboration. Along with Katherine Corker and Barbara Spellman, she has a chapter called "Open Science: What, why, how" forthcoming in the Stevens Handbook of Experimental Psychology and Cognitive Neuroscience.</span></em></p><p class="fine-print"><em><span>Katie Corker is on the executive board for the Society for the Improvement of Psychological Science (improvingpsych.org) and an ambassador for the Center for Open Science (cos.io). She is also an editorial board member for Scientific Data. All of these roles are pro bono.</span></em></p>Partly in response to the so-called ‘reproducibility crisis’ in science, researchers are embracing a set of practices that aim to make the whole endeavor more transparent, more reliable – and better.Elizabeth Gilbert, Postdoctoral Research Fellow in Psychiatry and Behavioral Sciences, Medical University of South CarolinaKatie Corker, Assistant Professor of Psychology, Grand Valley State University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/757422017-05-02T02:35:30Z2017-05-02T02:35:30ZHow to boil down a pile of diverse research papers into one cohesive picture<figure><img src="https://images.theconversation.com/files/167402/original/file-20170501-17304-nalnmm.jpg?ixlib=rb-1.1.0&rect=0%2C58%2C2114%2C1411&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Can an algorithmic method for analyzing published research help zero in on reality?</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/shelves-old-scientific-journals-202908463">Sergei25/Shutterstock.com</a></span></figcaption></figure><p>From social to natural and applied sciences, overall scientific output has been growing worldwide – it <a 
href="http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html">doubles every nine years</a>.</p>
<p>Traditionally, researchers solve a problem by conducting new experiments. With the ever-growing body of scientific literature, though, it is becoming more common to make a discovery based on the vast number of already-published journal articles. Researchers synthesize the findings from previous studies to develop a more complete understanding of a phenomenon. Making sense of this explosion of studies is critical for scientists not only to build on previous work but also to push research fields forward.</p>
<p>My colleagues <a href="http://mitsloan.mit.edu/faculty-and-research/faculty-directory/detail/?id=3547">Hazhir Rahmandad</a> and <a href="https://pwp.gatech.edu/kamran-paynabar/">Kamran Paynabar</a> and I have developed a new, more robust way to pull together all the prior research on a particular topic. In a five-year joint <a href="http://jalali.mit.edu/gma">project</a> between MIT and Georgia Tech, we worked to create a new technique for research aggregation. Our recently published paper in PLOS ONE introduces a flexible method that <a href="http://dx.doi.org/10.1371/journal.pone.0175111">helps synthesize findings from prior studies</a>, even potentially those with diverse methods and diverging results. We call it <a href="https://en.wikipedia.org/wiki/Generalized_model_aggregation">generalized model aggregation</a>, or GMA.</p>
<h2>Pulling it all together</h2>
<p><a href="http://researchguides.ebling.library.wisc.edu/c.php?g=293229&p=1953452">Narrative reviews</a> of the literature have long been a key component of scientific publications. The need for more comprehensive approaches has led to the emergence of two other very useful methods: <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3024725/">systematic review and meta-analysis</a>. </p>
<p>In a systematic review, an author finds and critiques all prior studies around a similar research question. The idea is to bring a reader up to speed on the current state of affairs around a particular research topic.</p>
<p>In a meta-analysis, researchers go one step further and synthesize the findings quantitatively. Essentially, a meta-analysis takes a weighted average of the findings of several studies on one topic. Pooling results from multiple studies is meant to generate a more reliable finding than any single study could. This is crucially helpful when prior studies reported diverging findings and conclusions. And publications of meta-analyses have shot up over the last decade, underscoring their importance across research communities.</p>
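<p>The weighted-average idea behind a simple fixed-effect meta-analysis can be sketched in a few lines. The effect sizes and standard errors below are invented for illustration; each study is weighted by the inverse of its variance, so more precise studies count for more.</p>

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical numbers).
effects = [0.30, 0.10, 0.25]      # invented effect estimates from 3 studies
std_errors = [0.05, 0.10, 0.08]   # invented standard errors

# Weight each study by the inverse of its variance.
weights = [1 / se**2 for se in std_errors]

# Pooled effect is the weighted average; its standard error shrinks
# as more (and more precise) studies are combined.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} ± {pooled_se:.3f}")
```

<p>Note how the imprecise second study (standard error 0.10) pulls the pooled estimate down only slightly, while the precise first study dominates.</p>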
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=354&fit=crop&dpr=1 600w, https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=354&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=354&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=444&fit=crop&dpr=1 754w, https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=444&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/163958/original/image-20170404-5725-g8zkku.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=444&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Publications of meta-analyses are on the rise, based on Web of Science search results for articles that included the term ‘meta-analysis’ in their title.</span>
<span class="attribution"><span class="source">Mohammad S. Jalali</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Meta-analysis has been helpful in increasing our understanding of many scientific problems. But it has some challenges. <a href="https://us.sagepub.com/en-us/nam/methods-of-meta-analysis/book240589">A typical meta-analysis</a> combines just one explanatory variable (that is, a treatment controlled by the experimenter) and one response variable (for instance, a health outcome). Also, a researcher has to be very careful not to lump apples and oranges together in the meta-analysis. She must be selective and make sure to include only previous work that shared a very similar study design.</p>
<p>Here is where our simple and flexible generalized model aggregation method comes in. Using GMA, the prior studies do not necessarily need to have the same study design or method. They can also have different explanatory variables. As long as they are all answering a similar research question, GMA can synthesize them.</p>
<h2>Pooling findings from across a field</h2>
<p>Consider an example from the health literature. Obesity and nutrition researchers need reliable equations that estimate basal metabolic rate (BMR) – the amount of energy the human body spends at complete rest. Understanding BMR has big implications for real-world questions of weight management.</p>
<p>Researchers often estimate BMR as a function of different attributes: age, height, weight, fat mass and fat-free mass. The challenge is that current publications in research journals <a href="https://doi.org/10.1038/ijo.2012.218">provide over 200 such equations</a> estimated for different samples and age groups. These equations also include different subsets of those attributes.</p>
<p>For example, one of these equations included weight and age, but another included only fat-free mass. Another equation considered the impact of all these attributes, but the sample size was too small to make it reliable. More interestingly, and confusingly, there have been several studies with similar samples and variables but they have reported very different equations to explain the relationships.</p>
<p>So which equations are you going to choose to accurately estimate BMR? How do you ensure that your selected equation is more reliable than the rest? </p>
<p>In order to address these questions, <a href="http://journals.plos.org/plosone/article/file?type=supplementary&id=info:doi/10.1371/journal.pone.0175111.s001">we identified 27 published BMR equations</a> for white males from published studies. Then we used GMA to aggregate them into a single equation, which we called a meta-model.</p>
<p>Through validation tests, we showed that our meta-model is more precise than any of the prior equations for estimating BMR. It also can deal with a logarithmic relationship between two variables – something not captured by any of the original 27 linear equations.</p>
<p>We tested our method by putting it up against more complex situations. What if all the equations we aggregate using GMA are actually off the mark? Would GMA still get close to what is really going on?</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=324&fit=crop&dpr=1 600w, https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=324&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=324&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=407&fit=crop&dpr=1 754w, https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=407&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/164555/original/image-20170408-29386-lwhnmr.PNG?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=407&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">The meta-model (on the right) relies only on reported information from the two incorrect models in the middle – not their observed data or the true data. And it is much closer to reality (on the left) than either incorrect model.</span>
<span class="attribution"><a class="source" href="https://doi.org/10.1371/journal.pone.0175111">Rahmandad et al, DOI: 10.1371/journal.pone.0175111</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>To investigate, we imagined two researchers coming up with two different linear equations to describe what they did not realize is actually a nonlinear phenomenon. The findings of the two researchers are far from reality. But again, our meta-model provided an extremely close estimate of reality – even when aggregating these two incorrect and biased models.</p>
<h2>How GMA gets at the truth</h2>
<p>So how does it all work? There is no magic here. In fact, the <a href="https://en.wikipedia.org/wiki/Generalized_model_aggregation">intuition behind GMA is simple</a>, which means researchers without an extensive statistical background can use it. </p>
<p>Broadly, each previous empirical study is an attempt to estimate an underlying reality. Let’s call this the “true model.” The true model is unknown to us; whatever actually drives the phenomenon under investigation is nature’s secret. The empirical studies report relevant information about the true model, even if they are biased or incomplete. </p>
<p>Generalized model aggregation uses computer simulations to replicate prior studies. This time, though, the simulated studies attempt to estimate a meta-model instead of the true model (that is, reality). </p>
<p>We feed the empirical studies’ reported estimates into the simulation. GMA’s flexibility also lets us use any additional information about the underlying true model – such as the relationships among the variables or the quality of the empirical studies’ estimates. This extra information helps increase the reliability of GMA’s estimates.</p>
<p>The GMA algorithm carefully applies the same sample characteristics to each previous study and replicates their same method. Then it compares the outcomes of the simulated studies with the actual results of the empirical studies, trying to find the closest match. Through this matching process, GMA estimates the meta-model.</p>
<p>If the simulated and actual outputs match, the meta-model may be a good representation of the true model – that is, by running a bunch of studies through the GMA algorithm, we are able to tease out a closer approximation of how the phenomenon in question actually works. </p>
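<p>The matching process described above can be illustrated with a toy version of the nonlinear example. Everything here is a hypothetical sketch, not the published GMA code: the unknown “true model” is y = 2·ln(x), two simulated studies report only their misspecified straight-line fits, and a simple grid search finds the meta-model coefficient whose replicated studies best match those reported estimates.</p>

```python
import math

def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Two hypothetical prior studies sampled different x-ranges and each
# fit a (misspecified) straight line to the true curve y = 2*ln(x).
# In practice only their reported intercepts/slopes would be known.
study_ranges = [[1, 2, 3, 4, 5], [5, 10, 15, 20, 25]]
reported = [linear_fit(xs, [2 * math.log(x) for x in xs])
            for xs in study_ranges]

# GMA idea: propose a meta-model y = c*ln(x), replicate each study's
# design in simulation, and score how closely the simulated linear
# fits match the estimates the studies actually reported.
def mismatch(c):
    total = 0.0
    for xs, (a_rep, b_rep) in zip(study_ranges, reported):
        a_sim, b_sim = linear_fit(xs, [c * math.log(x) for x in xs])
        total += (a_sim - a_rep) ** 2 + (b_sim - b_rep) ** 2
    return total

# Grid search over candidate coefficients; the best match recovers
# the true nonlinear model even though both inputs were linear.
best_c = min((c / 100 for c in range(100, 301)), key=mismatch)
print(f"Recovered coefficient: {best_c}")
```

<p>The real method handles richer model families and uses reported sample characteristics and estimate quality, but the core loop is the same: simulate each prior study under a candidate meta-model and search for the candidate that reproduces the published estimates.</p>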
<h2>Wide range of applications for GMA</h2>
<p>In our paper, we <a href="http://dx.doi.org/10.1371/journal.pone.0175111">discussed a wide range of examples</a>, from health to climate change and environmental sciences, that can benefit from generalized model aggregation. Using GMA to synthesize prior findings into a coherent meta-model can increase the accuracy of aggregation. </p>
<p>In the current replicability crisis, GMA can help not only identify studies that are reproducible, but also distinguish reliable findings from less robust ones. </p>
<p>We reported <a href="http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0175111#pone.0175111.s001">all the steps of our analysis</a> for further replication. A recipe for using GMA, along with its code and instructions, is also <a href="http://jalali.mit.edu/gma">publicly available</a>.</p>
<p>We hope that GMA can extend the reach of current research synthesis efforts to many new problems. GMA can help us understand the bigger picture of phenomena by aggregating their parts. Consider a puzzle with its pieces scattered about; the overall picture is revealed only when the pieces have been put together.</p>
<p class="fine-print"><em><span>Mohammad S. Jalali does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Researchers need to be able to draw conclusions based on previously published studies in their field. A new aggregation method synthesizes prior findings and may help reveal more of the big picture.Mohammad S. Jalali, Research Faculty, MIT Sloan School of ManagementLicensed as Creative Commons – attribution, no derivatives.