<h1>Just 3 Nobel Prizes cover all of science – how research is done today poses a challenge for these prestigious awards</h1>
<figure><img src="https://images.theconversation.com/files/550986/original/file-20230928-27-5ki0mu.jpg?ixlib=rb-1.1.0&rect=235%2C0%2C3660%2C2832&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Has the Nobel Prize category 'chemistry' morphed into 'biochemistry'?</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/october-2021-north-rhine-westphalia-mülheim-an-der-ruhr-the-news-photo/1235729277">picture alliance via Getty Images</a></span></figcaption></figure>
<p>I’ve been primarily an experimental chemist – the kind of person who goes into the laboratory and mixes and stirs chemicals – since the beginning of my career in 1965. For the past 15 years, though, I have been a full-time <a href="https://chemistry.richmond.edu/faculty/jseeman/">historian of chemistry</a>.</p>
<p>Every October, when the announcements are made of <a href="https://www.nobelprize.org/prizes/lists/all-nobel-prizes-in-chemistry/">that year’s Nobel laureates</a>, I examine the results as a chemist. And all too often, I share the same response as many of my fellow chemists: “Who are they? And what did they do?”</p>
<p>One reason for that bewilderment – and disappointment – is that in many recent years, none of my “favorites” or those of my fellow chemists will travel to Stockholm. I am not suggesting that <a href="https://www.chemistryworld.com/nobel-prize/the-data-behind-the-nobel-prizes/4010453.article">these Nobel laureates</a> are undeserving – quite the opposite. Rather, I am questioning whether some of these awards belong within the discipline of chemistry.</p>
<p>Consider some recent Nobel Prizes. In 2020, Emmanuelle Charpentier and Jennifer A. Doudna received the Nobel Prize “<a href="https://www.nobelprize.org/prizes/chemistry/2020/summary/">for the development of a method for genome editing</a>.” In 2018, Frances H. Arnold received the Nobel Prize “<a href="https://www.nobelprize.org/prizes/chemistry/2018/arnold/facts/">for the directed evolution of enzymes</a>,” which she shared with George P. Smith and Sir Gregory P. Winter “<a href="https://www.nobelprize.org/prizes/chemistry/2018/press-release/">for the phage display of peptides and antibodies</a>.” In 2015, Tomas Lindahl, Paul Modrich and Aziz Sancar received the Nobel Prize “<a href="https://www.nobelprize.org/prizes/chemistry/2015/summary/">for mechanistic studies of DNA repair</a>.”</p>
<p>All of them received Nobel Prizes in chemistry – not the Nobel Prize in <a href="https://www.nobelprize.org/prizes/lists/all-nobel-laureates-in-physiology-or-medicine/">physiology or medicine</a>, even though these achievements seem very clearly situated within the disciplines of medicine and the life sciences. There are many other similar examples.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="woman and man in formal dress at awards ceremony" src="https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/550989/original/file-20230928-23-jju93g.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">2018 co-laureate Frances Arnold receives her Nobel Prize in chemistry from King Carl XVI Gustaf of Sweden.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/co-laureate-of-the-2018-nobel-prize-in-chemistry-us-news-photo/1071162052">Henrik Montgomery/AFP via Getty Images</a></span>
</figcaption>
</figure>
<p>These recent mismatches are even clearer when you look further back in time. Consider the 1962 Nobel Prize awarded to Francis Crick, James Watson and Maurice Wilkins “for their <a href="https://www.nobelprize.org/prizes/medicine/1962/summary/">discoveries concerning the molecular structure of nucleic acids</a> and its significance for information transfer in living material.” <a href="https://www.britannica.com/science/DNA">DNA</a>, of course, is the most famous nucleic acid, and these three scientists were honored for deciphering how its atoms are bonded together and arranged in their three-dimensional double-helix shape.</p>
<p>While the “structure of DNA” most certainly is an achievement in chemistry, the Nobel Assembly at the Karolinska Institute in Stockholm awarded the Nobel Prize in physiology or medicine to Watson, Crick and Wilkins. Clearly, their Nobel achievements have had great consequences in the life sciences, genetics and medicine. Thus awarding them the Nobel Prize for physiology or medicine is quite appropriate.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="metal model of structure of DNA molecule double helix" src="https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=747&fit=crop&dpr=1 600w, https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=747&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=747&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=939&fit=crop&dpr=1 754w, https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=939&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/550991/original/file-20230928-23-vr2kf8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=939&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">A model of a DNA molecule using some of Watson and Crick’s original metal plates.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/this-reconstruction-of-the-double-helix-model-of-dna-news-photo/90739243">Science & Society Picture Library via Getty Images</a></span>
</figcaption>
</figure>
<p>But note the disconnect. The Nobel Prizes in chemistry in 2020, 2018 and 2015 are more life-science- and medicine-oriented than Watson, Crick and Wilkins’ prize for the structure of DNA. Yet the former were awarded in chemistry, while the latter was in physiology or medicine.</p>
<p>What is going on? What does this trend reveal about the Nobel Foundation and its award strategies in response to the growth of science?</p>
<h2>A gradual evolution in the Nobel Prizes</h2>
<p>Several years ago, chemist-historian-applied mathematician <a href="https://scholar.google.com/citations?user=b4CW5CEAAAAJ&hl=en&oi=ao">Guillermo Restrepo</a> and I collaborated to study the relationship of scientific discipline to the Nobel Prize.</p>
<p>Each year, the Nobel Committee for chemistry <a href="https://www.nobelprize.org/nomination/archive/search.php">studies the nominations</a> <a href="https://chemistry-europe.onlinelibrary.wiley.com/doi/abs/10.1002/chem.202203985">and proposes the recipients</a> of the Nobel Prize in chemistry to its parent organization, the Royal Swedish Academy of Sciences, which ultimately selects the Nobel laureates in chemistry (and physics).</p>
<p>We found a strong correlation between the disciplines of the members of the committee and the disciplines of the awardees themselves. Over the lifetime of the Nobel Prizes, there has been a continuous increase – from about 10% in the 1910s to 50% into the 2000s – in the percentage of committee members whose research is best identified within the life sciences.</p>
<p><a href="https://doi.org/10.1002/anie.201906266">Restrepo and I concluded</a>: As go the expertise, interests and the disciplines of the committee members, so go the disciplines honored by the Nobel Prizes in chemistry. We also concluded that the academy has intentionally included more and more life scientists on their selection committee for chemistry.</p>
<p>Now some perceptive readers might ask, “Is not the discipline of biochemistry just a subdiscipline of chemistry?” The underlying question is, “How does one define the disciplines in science?”</p>
<p>Restrepo and I reasoned that what we term “intellectual territory” defines the boundaries of a discipline. Intellectual territory can be assessed by bibliographic analysis of the scientific literature. We examined the references, often called citations, that are found in scientific publications. These references are where authors of journal articles cite the related research that’s previously been published – often the research they have relied on and built on. <a href="https://doi.org/10.1002/anie.201906266">We chose to study two journals</a>: a chemistry journal named Angewandte Chemie and a life science journal named, rather aptly, Biochemistry.</p>
<p>We found that the articles in Angewandte Chemie mostly cite articles published in other chemistry journals, and the articles in Biochemistry mostly cite articles in biochemistry and life sciences journals. We also found that the reverse is true: Scientific publications that cite Angewandte Chemie articles are mostly in chemistry journals, and publications that cite Biochemistry articles are mostly in biochemistry and life science journals. In other words, chemistry and the life sciences/biochemistry reside in vastly different intellectual territories that don’t tend to overlap much.</p>
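The citation-overlap analysis described above can be sketched in a few lines of Python. This is a minimal illustration with invented toy counts (the journals' discipline labels and citation numbers are assumptions for demonstration, not the study's actual data): for each journal, tally the disciplines of the work its articles cite and report the dominant "intellectual territory."

```python
from collections import Counter

# Toy citation data: for each source journal, the disciplines of the
# publications its articles cite. All numbers are illustrative.
outgoing = {
    "Angewandte Chemie": ["chemistry"] * 80 + ["life sciences"] * 12 + ["physics"] * 8,
    "Biochemistry": ["life sciences"] * 75 + ["chemistry"] * 18 + ["physics"] * 7,
}

def territory_profile(citations):
    """Fraction of a journal's references landing in each discipline."""
    counts = Counter(citations)
    total = sum(counts.values())
    return {discipline: n / total for discipline, n in counts.items()}

for journal, cites in outgoing.items():
    profile = territory_profile(cites)
    top = max(profile, key=profile.get)
    print(f"{journal}: mostly cites {top} ({profile[top]:.0%})")
```

Running the same tally in the reverse direction (who cites each journal) gives the second half of the argument: two journals whose profiles barely overlap occupy different intellectual territories.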
<h2>Not letting labels be limiting</h2>
<p>But now, perhaps a shocker. Many scientists don’t really care how they are classified by others. Scientists care about science.</p>
<p>As I’ve heard Dudley Herschbach, recipient of the <a href="https://www.nobelprize.org/prizes/chemistry/1986/summary/">1986 Nobel Prize in chemistry</a>, respond to the oft-asked question of whether he’s an experimental chemist or a theoretical chemist: “The molecules don’t know, nor do they care, do they?”</p>
<p>But scientists, like all human beings, do care about recognition and awards. And so, chemists do mind that the Nobel Prize in chemistry has morphed into the Nobel Prize in chemistry and the life sciences.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="black and white head shot of man in early 20th C attire" src="https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=831&fit=crop&dpr=1 600w, https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=831&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=831&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1044&fit=crop&dpr=1 754w, https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1044&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/550994/original/file-20230928-19-rbogdx.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1044&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Jacobus Henricus van ‘t Hoff received the first Nobel Prize in chemistry for 'discovery of the laws of chemical dynamics and osmotic pressure in solutions.’</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/news-photo/jacobus-henricus-vant-hoff-dutch-chemist-nobel-prize-for-news-photo/629452997">Universal History Archive/Universal Images Group via Getty Images</a></span>
</figcaption>
</figure>
<p>Since the Nobel Prizes were first awarded in 1901, the community of scientists and the number of scientific disciplines have grown tremendously. Even today, new disciplines are being created. New journals are appearing. Science is becoming more multidisciplinary and interdisciplinary. Even chemistry as a discipline has grown dramatically, pushing outward its own scholarly boundaries, and chemistry’s achievements continue to be astounding.</p>
<p><a href="https://cen.acs.org/people/nobel-prize/biochemists-life-scientists-winning-Nobel/97/web/2019/12">The Nobel Prize hasn’t evolved sufficiently with the times</a>. And there just are not enough Nobel Prizes to go around to all the deserving.</p>
<p>I can imagine an additional Nobel Prize for the life sciences. The number of awardees could expand from the current three-per-prize maximum to whatever fits the accomplishment. Nobel Prizes <a href="https://www.nobelprize.org/prizes/facts/nobel-prize-facts/">could be awarded posthumously</a> to make up for past serious omissions, an option that was used by the Nobel Foundation for several years and then discontinued.</p>
<p>In truth, the Nobel Foundation has let the prizes evolve, but very deliberately and without the major transformations that I think will certainly be required in the future. It will, I believe, eventually break free from the mire of Alfred Nobel’s will and more than a century of distinguished tradition.</p>
<p><a href="https://www.nobelprize.org/alfred-nobel/alfred-nobels-will/">When Nobel designed the prizes</a> named after him in the late 1800s and early 1900s, he couldn’t have known that his gift would become a perpetual endowment and have such lasting – indeed, even increasing – significance. Nobel also could not have anticipated the growth of science, nor the fact that over time, some disciplines would fade in importance and new disciplines would evolve.</p>
<p>So far, the extremely competent and highly dedicated scholars at the Nobel Foundation and their partner organizations – and I acknowledge with real appreciation their selfless devotion to the cause – haven’t responded adequately to the growth of the sciences or to the inequities and even incompleteness of past award years. But I have confidence: In time, they will do so.</p>
<p class="fine-print"><em><span>Jeffrey I. Seeman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The Nobel Prize categories were set up more than a century ago. Since then, science has grown and evolved in unpredictable ways.Jeffrey I. Seeman, Visiting Research Scholar in Chemistry, University of RichmondLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1920802023-01-10T13:30:47Z2023-01-10T13:30:47ZChina now publishes more high-quality science than any other nation – should the US be worried?<figure><img src="https://images.theconversation.com/files/503136/original/file-20230104-18-tav51z.jpg?ixlib=rb-1.1.0&rect=131%2C143%2C6347%2C4347&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">In 2022, Chinese researchers published more scientific papers on artificial intelligence than any other nation.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/artificial-intelligence-concept-brain-and-cpu-with-royalty-free-image/1322017261?phrase=china%20flag%20science&adppopup=true">Mf3D/E+ via Getty Images</a></span></figcaption></figure><p>By at least one measure, China now <a href="https://www.science.org/content/article/china-rises-first-place-most-cited-papers">leads the world in producing high-quality science</a>. My research shows that Chinese scholars now publish <a href="https://doi.org/10.1007/s11192-022-04291-z">a larger fraction of the top 1% most cited scientific papers</a> globally than scientists from any other country. </p>
<p>I am a <a href="https://glenn.osu.edu/caroline-s-wagner">policy expert and analyst</a> who studies how <a href="https://scholar.google.com/citations?user=OBu0OHEAAAAJ&hl=en&oi=ao">governmental investment in science, technology and innovation</a> improves social welfare. While a country’s scientific prowess is somewhat difficult to quantify, I’d argue that the amount of money spent on scientific research, the number of scholarly papers published and the quality of those papers are good stand-in measures.</p>
<p>China is not the only nation to drastically improve its science capacity in recent years, but China’s rise has been particularly dramatic. This has left U.S. <a href="https://carnegieendowment.org/2022/04/25/maintaining-military-edge-over-china-pub-86901">policy experts and government officials worried</a> about how China’s scientific supremacy will <a href="https://www.brookings.edu/research/china-and-the-challenge-to-global-order/">shift the global balance of power</a>. China’s recent ascendancy results from years of governmental policy aiming to make the country a leader in science and technology. China has taken explicit steps to get where it is today, and the U.S. now has a choice to make about how to respond to a scientifically competitive China.</p>
<h2>Growth across decades</h2>
<p>In 1977, Chinese leader Deng Xiaoping introduced the <a href="https://en.wikipedia.org/wiki/Four_Modernizations">Four Modernizations</a>, one of which was strengthening China’s science sector and technological progress. As recently as 2000, the <a href="https://doi.org/10.1007/s11024-015-9273-6">U.S. produced many times the number of scientific papers as China</a> annually. However, over the past three decades or so, China has invested funds to grow domestic research capabilities, to send students and researchers abroad to study, and to encourage Chinese businesses to shift to manufacturing high-tech products. </p>
<p>Since 2000, China has sent an estimated <a href="http://www.xinhuanet.com/english/2018-03/30/c_137077465.htm">5.2 million students and scholars to study abroad</a>. The majority of them studied science or engineering. Many of these students remained where they studied, but an increasing number <a href="https://doi.org/10.1093/scipol/scz056">return to China</a> to work in well-resourced laboratories and high-tech companies.</p>
<p>Today, China is second only to the U.S. in how much it <a href="https://doi.org/10.1126/science.abe5456">spends on science and technology</a>. Chinese universities now produce the largest <a href="https://www.universityworldnews.com/post.php?story=20210910110221730">number of engineering Ph.D.s</a> in the world, and the quality of Chinese universities has <a href="https://www.shanghairanking.com/">dramatically improved in recent years</a>.</p>
<p><iframe id="ODHkO" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/ODHkO/1/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<h2>Producing more and better science</h2>
<p>Thanks to all this investment and a growing, capable workforce, China’s scientific output – as measured by the number of total published papers – has increased steadily over the years. In 2017, <a href="https://doi.org/10.1038/d41586-018-00927-4">Chinese scholars published more scientific papers</a> than U.S. researchers for the first time.</p>
<p>Quantity does not necessarily mean quality though. For many years, researchers in the West wrote off Chinese research as low quality and often as simply <a href="https://hbr.org/2014/03/why-china-cant-innovate">imitating research from the U.S. and Europe</a>. During the 2000s and 2010s, much of the work coming from China did not receive significant attention from the global scientific community.</p>
<p>But as China has continued to invest in science, I began to wonder whether the explosion in the quantity of research was accompanied by improving quality. </p>
<p>To quantify China’s scientific strength, my colleagues and I looked at citations. A citation is when an academic paper is referenced – or cited – by another paper. We took the view that the more times a paper has been cited, the higher its quality and the more influential the work. Given that logic, the top 1% most cited papers should represent the upper echelon of high-quality science.</p>
<p>My colleagues and I counted how many papers published by a country were in the top 1% of science as measured by the number of citations in various disciplines. Going year by year from 2015 to 2019, we then compared different countries. We were surprised to find that in 2019, Chinese authors <a href="https://www.science.org/content/article/china-rises-first-place-most-cited-papers">published a greater percentage of the most influential papers</a>, with China claiming 8,422 articles in the top category, while the U.S. had 7,959 and the European Union had 6,074. In just one recent example, we found that in 2022, Chinese researchers published three times as many papers on artificial intelligence as U.S. researchers; in the top 1% most cited AI research, Chinese papers outnumbered U.S. papers by a 2-to-1 ratio. Similar patterns can be seen with China leading in the top 1% most cited papers in nanoscience, chemistry and transportation.</p>
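For readers curious about the mechanics, the counting procedure described above (rank papers by citations, keep the top 1%, tally by country) can be sketched as follows. The corpus here is randomly generated toy data, so the resulting tallies are illustrative only and do not reproduce the study's numbers.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic corpus of (country, citation count) pairs -- purely
# illustrative stand-ins for real bibliometric records.
countries = ["China", "US", "EU"]
papers = [(random.choice(countries), random.randint(0, 500))
          for _ in range(10_000)]

# Rank all papers by citation count and keep the top 1%.
papers.sort(key=lambda p: p[1], reverse=True)
top_1pct = papers[: len(papers) // 100]

# Tally how many top-1% papers each country claims.
tally = Counter(country for country, _ in top_1pct)
print(tally.most_common())
```

In the real study the same tally is computed per discipline and per year, which is what allows comparisons like the 2-to-1 China-to-U.S. ratio in top-cited AI papers.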
<p>Our research also found that Chinese research was <a href="https://doi.org/10.1007/s11192-020-03579-2">surprisingly novel and creative</a> – and not simply copying western researchers. To measure this, we looked at the mix of disciplines referenced in scientific papers. The more diverse and varied the referenced research was in a single paper, the more interdisciplinary and novel we considered the work. We found Chinese research to be as innovative as other top performing countries.</p>
<p>Taken together, these measures suggest that China is <a href="https://www.global-briefing.org/2014/01/the-origins-of-chinas-copycat-culture/">no longer an imitator</a>, nor a producer of only low-quality science. China is now a scientific power on par with the U.S. and Europe, both in quantity and in quality.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="President Joe Biden surrounded by a number of people sitting at a desk in front of the White House." src="https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/503138/original/file-20230104-105135-py8hfh.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">On August 9, 2022, President Joe Biden signed the CHIPS and Science Act into law to support the growth of U.S. research and technology firms as a way to counter China’s scientific growth.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/whitehouse/52385519067/">The White House/Flickr</a></span>
</figcaption>
</figure>
<h2>Fear or collaboration?</h2>
<p>Scientific capability is intricately tied to both military and economic power. Because of this relationship, many in the U.S. – from <a href="https://www.bis.doc.gov/index.php/documents/about-bis/newsroom/press-releases/3158-2022-10-07-bis-press-release-advanced-computing-and-semiconductor-manufacturing-controls-final/file">politicians</a> to <a href="https://www.csis.org/analysis/choking-chinas-access-future-ai">policy experts</a> – have expressed concern that China’s scientific rise is a threat to the U.S., and the government has taken steps to slow China’s growth. The recent <a href="https://www.amchamchina.org/us-china-agriculture-and-food-partnership/">CHIPS and Science Act of 2022</a> explicitly limits cooperation with China in some areas of research and manufacturing. In October 2022, the Biden administration put restrictions in place to limit China’s access to <a href="https://www.washingtonpost.com/technology/2022/10/07/china-high-tech-chips-restrictions/">key technologies with military applications</a>.</p>
<p>A number of scholars, including me, see these fears and policy responses as rooted in a nationalistic view that doesn’t wholly map onto the global endeavor of science.</p>
<p>Academic research in the modern world is in large part driven by the exchange of ideas and information. The results are published in publicly available journals that anyone can read. Science is also becoming ever more <a href="http://dx.doi.org/10.1007/s11192-016-2230-9">international and collaborative</a>, with researchers around the world depending on each other to push their fields forward. Recent collaborative research <a href="https://www.nature.com/articles/d41586-022-00570-0">on cancer</a>, <a href="https://doi.org/10.1371/journal.pone.0236307">COVID-19</a> and <a href="https://www.amchamchina.org/us-china-agriculture-and-food-partnership/">agriculture</a> are just a few of many examples. My own work has also shown that when researchers from China and the U.S. collaborate, they produce <a href="https://doi.org/10.1038/550032a">higher quality</a> science than either one alone.</p>
<p>China has joined the ranks of top scientific and technological nations, and some of the concerns over shifts of power are reasonable in my view. But the U.S. can also benefit from China’s scientific rise. With many global issues facing the planet – like <a href="https://www.brookings.edu/blog/planetpolicy/2021/10/28/rebuilding-us-chinese-cooperation-on-climate-change-the-science-and-technology-opportunity/">climate change</a>, to name just one – there may be wisdom in looking at this new situation as not only a threat, but also an opportunity.</p>
<p class="fine-print"><em><span>Caroline Wagner does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>In 2014, Chinese researchers published more papers than any other country for the first time. In 2019, China overtook the U.S. as the No. 1 publisher of the most influential papers.Caroline Wagner, Milton & Roslyn Wolf Chair in International Affairs, The Ohio State UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1862512022-07-18T13:51:19Z2022-07-18T13:51:19ZCiting blogs in academic publications: lessons from urban planning in COVID<figure><img src="https://images.theconversation.com/files/473892/original/file-20220713-18-w1aftx.jpeg?ixlib=rb-1.1.0&rect=30%2C0%2C4059%2C2728&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Blog art. Shutterstock</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-vector/concept-blogging-golden-blog-word-bubble-1146206048">www.shutterstock.com</a></span></figcaption></figure><p>Blogs are a double-edged sword. These online essays can be produced by anyone with access to a computer and the internet. The writers could be well-informed experts with valuable insights to share, or official government agencies. Some bloggers are outstanding academic authors who have published their work in accessible databases, like Scopus, Web of Science and Google Scholar. Others may be poorly informed people simply sharing a biased opinion. </p>
<p>Although readers might be able to comment on a blog, the material doesn’t have the status of <a href="https://www.elsevier.com/reviewers/what-is-peer-review">peer-reviewed</a> academic research. Checks for quality and reliability aren’t built into blog publishing as they are in academic journals. Previous <a href="https://www.igi-global.com/gateway/article/123139">research</a> has discussed the role of blogging communities as reference sources in scientific manuscripts. The <a href="https://www.research-integrity.admin.cam.ac.uk/research-integrity/guidance/citing-blogs-reference-sources">University of Cambridge</a> has warned its researchers not to rely on them.</p>
<p>But some blogs still have appeal as sources of information and ideas for researchers because they often deal with new situations that haven’t yet been covered in the traditional academic literature. </p>
<p>The emergence of COVID was a perfect example of a new and fast-changing situation like this. Vast amounts of information were becoming available online — some of it in blogs that were reliable and useful to the public and to academics, some not. </p>
<p>Researchers and students need to know how to balance their data sources.</p>
<p>Little is known about how to identify reliable blog content in our field of study, urban planning. We decided to <a href="https://www.tandfonline.com/doi/full/10.1080/02697459.2022.2085352">explore this</a>, starting by looking at which kinds of blogs were already being cited by academics, and what <a href="https://www.tandfonline.com/doi/abs/10.1080/02697459.2022.2085352?journalCode=cppr20">criteria</a> they were using to guide their choice of blogs. </p>
<p>We found the blogs cited in academic publications were mostly published by governmental and non-governmental organisations. We analysed the ways in which these blogs had influenced the public dialogue over COVID, and demonstrated that they were founded on unique ideas that had not yet undergone peer assessment. </p>
<p>We also came up with three tips that academics can use for citing blogs in their research. </p>
<h2>Citing blogs about COVID-19</h2>
<p>A lot of academics and researchers in city planning and design turned to blogs during the coronavirus outbreak for information. We conducted a scoping study in 2020, analysing 31 blog posts from four types of blogging sources, cited across 10 articles in seven social science journals. We searched journals published in 2020 for articles about COVID-19 that used blogs as references. </p>
<p>We found that in the year 2020, academics and researchers in urban planning and design used blogs produced by four types of publishers: government agencies, nongovernmental organisations, private groups, and individuals. </p>
<p>Moreover, we found that academics and researchers cited blogs for three reasons:</p>
<ul>
<li><p>collecting quantitative data resulting from statistical analysis</p></li>
<li><p>shedding light on qualitative knowledge related to social solutions like social distancing and lockdown </p></li>
<li><p>confronting the challenges of pandemics through the principles of urban planning.</p></li>
</ul>
<h2>Criteria for citing blogs</h2>
<p>This analysis was part of a wider study of the use of blogs by academics. Based on this work, we have three tips for finding blogs that publish scientific findings on vital topics like COVID-19 and can be cited in scholarly articles. </p>
<ol>
<li><p>Academics and researchers in urban planning and design can cite blogs in their scholarly work by selecting posts that provide relevant analysis, results and findings produced by government agencies and nongovernmental organisations. </p></li>
<li><p>For blogs written by individuals or private groups such as <a href="https://www.brookings.edu/">Brookings</a> and
<a href="https://www.bloomberg.com/citylab">CityLab</a>, it is crucial to keep track of bloggers in scientific databases, like the <a href="https://mjl.clarivate.com/home">Web of Science</a>, <a href="https://www.scopus.com/home.uri">Scopus</a> or <a href="https://scholar.google.com/">Google Scholar</a>. Several metrics can help gauge a blogger’s standing, including the number of citations, <a href="https://blog.scopus.com/posts/the-scopus-h-index-what-s-it-all-about-part-i">h-index</a>, and <a href="https://clarivate.libguides.com/incites_ba/understanding-indicators">normalised citation impact</a>. </p></li>
<li><p>Citing blogs can be based on the number of views or reviews, which can indicate the possibility of an open-ended debate about the post. However, while view counts can seem impressive, they are not a reliable basis for citing a blog: they reflect the importance of the topic rather than the reliability of the information the post provides.</p></li>
</ol>
<p>By following these tips, academics and researchers can use blogs as reliable sources of information. They can be cited in scholarly publications for emerging issues such as COVID-19 in its early stages. These tips can guide academics and researchers when they tackle topics still under research or not covered in scientific studies.</p>
<p class="fine-print"><em><span>Abeer Elshater receives funding from Science, Technology & Innovation Funding Authority (STDF). </span></em></p><p class="fine-print"><em><span>Hisham Abusaada does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Blogs can be useful sources of data for the urban planning research community if the researchers know how to assess them critically.Abeer Elshater, Professor, Ain Shams UniversityHisham Abusaada, Professor in Housing and Building National Research Center (HBRC), Cairo, Egypt, Housing and Building National Research CenterLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1788932022-06-22T20:03:42Z2022-06-22T20:03:42ZFemale finance leaders outperform their male peers, so why so few of them in academia and beyond?<figure><img src="https://images.theconversation.com/files/468152/original/file-20220610-29204-7sdug2.jpg?ixlib=rb-1.1.0&rect=291%2C7%2C4315%2C2866&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The gender diversity of thought leadership in finance is lower than in most other academic fields, <a href="https://ssrn.com/abstract=3998218">our research</a> shows. Finance ranks 132nd out of 175 fields with a representation of only 10.3% women among its thought leaders. Yet these women outperform their male peers.</p>
<p>How did we measure this? The impact of an academic’s ideas can be quantified using academic citations – how often their work is referenced in research published by other academics. We consider thought leaders to be academics who have been <a href="https://doi.org/10.1371/journal.pbio.3000918">ranked</a> among the <a href="https://doi.org/10.1371/journal.pbio.3000384">top 2%</a> in their respective fields by citations in the <a href="https://www.elsevier.com/en-au/solutions/scopus">Scopus</a> database. </p>
<p>We found the percentage of female thought leaders in finance is lower than in economics and in the fields of science, technology, engineering and mathematics (STEM).
It’s surprising since finance is a younger field than economics and so might be expected to be less traditionally male-dominated. The field of academic finance was carved out of economics in the early 1940s. </p>
<p>Our evidence on thought leadership is consistent with other evidence that women are less represented in finance academia than in economics. This is true at every level, from incoming PhD students through to full professors. </p>
<p>We see the under-representation of women in finance both among academics and <a href="https://docs.preqin.com/reports/Preqin-Women-in-Private-Equity-February-2019.pdf">more broadly</a>. A <a href="https://www2.deloitte.com/us/en/insights/industry/financial-services/diversity-and-inclusion-in-financial-services-leadership.html/#endnote-sup-4">2020 Deloitte report</a> noted: </p>
<blockquote>
<p>“All but six of 111 CEOs at the 107 largest US public financial institutions (including four with co-CEOs) are men.”</p>
</blockquote>
<iframe title="% female employees at private equity firms by seniority" aria-label="Grouped Column Chart" id="datawrapper-chart-hnhni" src="https://datawrapper.dwcdn.net/hnhni/2/" scrolling="no" frameborder="0" style="border: none;" width="100%" height="400"></iframe>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/gender-equity-the-way-things-are-going-we-wont-reach-true-parity-until-the-22nd-century-112685">Gender equity. The way things are going, we won't reach true parity until the 22nd century</a>
</strong>
</em>
</p>
<hr>
<h2>Why are so few women in finance?</h2>
<p>The fact that finance is less gender-diverse than other maths-intensive fields suggests standard arguments about women’s preferences with respect to STEM subjects cannot explain their low representation in finance. </p>
<p>Country-level culture is also unlikely to explain women’s representation in finance. As our research shows, finance thought leadership is geographically concentrated. Only 20% of finance thought leaders are located outside the USA or UK. </p>
<p>Instead, we argue the culture of academic finance is less welcoming to women than it is to men. We provide two pieces of evidence for this argument. </p>
<p>First, we show that individual female thought leaders in finance have more impact than their male peers, as measured by citations per paper, their academic rank and a composite score of six citation metrics (total citations, <a href="https://theconversation.com/explainer-what-is-an-h-index-and-how-is-it-calculated-41162">H-index</a>, <a href="https://www.sciencedirect.com/science/article/abs/pii/S1751157708000254">Hm-index</a>, citations of single, first and last-authored papers). This finding is especially striking given <a href="https://www.dropbox.com/s/hglddwbrka4s52/Innovative_ideas_women_%20mkoffi.pdf?dl=0">evidence</a> that women’s research is less likely to be cited. Female thought leaders in finance also have relatively more impact than they do in economics or other STEM fields. </p>
<p>These results suggest the obstacles women face in finance are greater than in other fields. The individuals who overcome these barriers outperform their peers. </p>
<p>Second, we show that women’s beliefs about the <a href="https://www.science.org/doi/10.1126/science.1261375">level of innate talent</a> needed to succeed in finance (as opposed to motivation and effort) are not correlated with women’s representation in finance thought leadership, but men’s beliefs are. These results are consistent with the idea that men’s beliefs represent a greater barrier to equality in thought leadership, role modelling and education in the “masculine” field of finance than in other fields. </p>
<p><div data-react-class="Tweet" data-react-props="{&quot;tweetId&quot;:&quot;1113018473398009858&quot;}"></div></p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/as-women-decide-australias-new-leaders-what-is-going-on-with-academic-leadership-184163">As women decide Australia's new leaders, what is going on with academic leadership?</a>
</strong>
</em>
</p>
<hr>
<h2>Lack of diversity is a handicap</h2>
<p>The finance sector is a bedrock of the world economy. It’s the <a href="https://www.rba.gov.au/education/resources/snapshots/economy-composition-snapshot/">third-largest industry in Australia</a>, accounting for 8% of economic output. The lack of diversity in thought leadership for such an important sector is problematic for several reasons. </p>
<p><a href="https://www.science.org/doi/10.1126/science.1240474">Diversity of thought and innovation are linked</a>. Lack of diversity means the finance industry may be less innovative than it could be. </p>
<p>The finance sector may also be less welcoming to women than it should be. The general public does not always embrace finance despite its importance. Stockmarket participation is low in some countries and demographic groups, as is financial literacy. </p>
<p>Trust in finance might be higher when finance professionals are more similar to members of the general population. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/women-are-dropping-out-of-economics-so-men-are-running-our-economy-74698">Women are dropping out of economics, so men are running our economy</a>
</strong>
</em>
</p>
<hr>
<h2>What can universities do about it?</h2>
<p>Women are also less likely to enter the field of finance after graduating. They make up only <a href="https://www.mgsm.edu.au/__data/assets/pdf_file/0004/622633/WiMBA-FAQs-doc_Jan-31-2018-For-Web.pdf">35%</a> of MBA enrolments in Australia (<a href="http://www.fortefoundation.org/site/DocServer/Forte_Women_s_Enrollment_2021_release.pdf">41%</a> in the USA). The absence of female thought leadership, role models and educators in finance may help explain women’s under-representation in MBA enrolment and in the finance sector.</p>
<p>To overcome inequality in finance, the culture of finance academia must change. But culture cannot change on demand. </p>
<p>The leadership of academic finance associations and our universities should provide opportunities for introspection, reflection and discussion of these issues. We should start by discussing why academia seems to be focused primarily on producing more science, rather than better science. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/the-push-for-researcher-entrepreneurs-could-be-a-step-backward-for-gender-equity-176536">The push for 'researcher entrepreneurs' could be a step backward for gender equity</a>
</strong>
</em>
</p>
<hr>
<p>We should also acknowledge the role of <a href="https://www.promarket.org/2021/10/15/academic-gatekeepers-political-finance/">gatekeepers</a> and take steps to diminish their influence. Universities, academic associations and journals should increase the transparency of their operations. The process through which positions of power are filled, like those of university deans and journal editors, should be transparent. Opportunities for individuals to exercise their voice without repercussion should be provided. </p>
<p>All these organisations must demonstrate a commitment to unbiased decision-making as a core element of good governance. Only when the rules of the game are clear can there be a hope of changing the rules to level the playing field.</p>
<p class="fine-print"><em><span>In the past, Renee Adams received funding from various research agencies for other research projects. </span></em></p><p class="fine-print"><em><span>Jing Xu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>A very low percentage of women are leaders in the field of finance. Gender equity will benefit both scholarship and Australia’s third-largest economic sector.Renee Adams, Professor of Finance, University of OxfordJing Xu, Lecturer in Finance, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1818822022-05-05T16:38:02Z2022-05-05T16:38:02ZResearchers should be assessed on quality not quantity: here’s how<figure><img src="https://images.theconversation.com/files/460797/original/file-20220502-21-uy3xxu.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Researchers need to be assessed on every aspect of their work, no matter where it takes place.</span> <span class="attribution"><span class="source">Photo by marlenefrancia/Shutterstock</span></span></figcaption></figure><p>How do you assess academic researchers for promotion or funding? This question has become ever more central in higher education settings since the 1980s saw substantial growth in investment in research. This significantly increased the number of researchers in the academic workforce and the need to assess their output for employment, promotion and other career advancements.</p>
<p>One response to the need to “scale up” researcher assessments was to introduce publication metrics. These are counts of publications and citations and more complex measures like the <a href="https://libguides.lib.uct.ac.za/tracking_your_academic_footprint/h-index">Hirsch Index</a> and the <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4150161/">Impact Factor</a>. These allowed for relatively easy assessment and comparison of researchers’ careers. They were seen to be both more objective and less time consuming than traditional assessments in which narrative bio sketches were peer reviewed subjectively.</p>
<p>But it’s now widely accepted that the metrics approach to assessment can negatively affect the research system and research outputs. It values quantity over quality and creates perverse incentives that easily lead to <a href="http://rdcu.be/mPZT">questionable research practices</a>. Relying too much on metrics has led to researchers engaging in practices that reduce the trust in, and quality of, research. These include “<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5178044/#">salami slicing</a>” (the spreading of study results over as many publications as possible to ensure numerous publications) and <a href="https://doi.org/10.1017/S0033291718001873">selective reporting</a>. </p>
<p>The pressure to publish also makes researchers vulnerable to predatory journals. Because having many publications and many citations is made so important, the pressure to cut corners is high. This can lead to low-quality, flawed research that typically overstates effects and downplays limitations. When the findings of that research are implemented, harm is done to patients, society or the environment.</p>
<p>Researcher assessment criteria and practices need to be overhauled. We believe the best way to do this is using the <a href="https://wcrif.org/guidance/hong-kong-principles">Hong Kong Principles on Assessing Researchers</a> which emerged from the 6th World Conference on Research Integrity in 2019. The principles were developed to reinforce the need to award researchers for practices that <a href="https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000737">promote trustworthy research</a>. “Trustworthy research” is relevant, valid and is done in a transparent and accountable way without researchers being distracted by other interests.</p>
<p>These principles move beyond merely questioning the use of research metrics for assessment. Instead they offer alternative indicators to assess researchers and reward behaviour. The idea is to foster research integrity and responsible conduct of research. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/why-developing-countries-are-particularly-vulnerable-to-predatory-journals-86704">Why developing countries are particularly vulnerable to predatory journals</a>
</strong>
</em>
</p>
<hr>
<p>We believe they should be widely adopted. But there are gaps that must be addressed to ensure that the principles don’t leave institutions in the global south, including those in Africa, out in the cold.</p>
<h2>A possible way forward</h2>
<p>The Hong Kong Principles and similar initiatives are gaining traction and changing researcher assessment in many countries and institutions worldwide.</p>
<p>The <a href="https://wcrif.org/guidance/hong-kong-principles">principles</a> are:</p>
<ul>
<li><p>Assess researchers on responsible practices from conception to delivery. That includes the development of the research idea, research design, methodology, execution and effective dissemination.</p></li>
<li><p>Value the accurate and transparent reporting of all research, regardless of the results.</p></li>
<li><p>Value the practices of open science (open research) such as open methods, materials and data.</p></li>
<li><p>Value a broad range of research and scholarship, such as replication, innovation, translation and synthesis.</p></li>
<li><p>Value a range of other contributions to responsible research and scholarly activity, such as peer review for grants and publications, mentoring, outreach and knowledge exchange.</p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/how-to-approach-the-revolution-in-scholarly-publishing-116091">How to approach the revolution in scholarly publishing</a>
</strong>
</em>
</p>
<hr>
<p>The principles also include a strong focus on practical implementation, with an understanding that this is not a straightforward process. They call for the sharing of practices around implementation.</p>
<h2>The challenge of implementation</h2>
<p>The movement to change the way researchers are measured should undoubtedly be embraced. But it’s important this be done in a way that doesn’t leave poorly resourced institutions in the global south behind. Even for researchers in the global north, the sorts of new expectations contained in the principles can be frustrating, because they require additional time and resources.</p>
<p>The most obvious example of this is <a href="https://www.nature.com/articles/d41586-022-00724-0">Principle Three</a>: value the practices of open science. A researcher cannot do this alone. They need to be supported by adequate infrastructure, skills, funding, and even discipline-specific training to ensure their data are published in a way that is FAIR (findable, accessible, interoperable and reusable). There are <a href="https://www.nicis.ac.za/dirisa/">some initiatives</a> in Africa to build this kind of infrastructure and skills. But this demand may prove an insurmountable challenge for many African researchers.</p>
<p>African institutions often have a shortage of skilled research management staff to support researchers and ensure their research practices remain in line with international trends. This means researchers from under-resourced institutions may risk losing opportunities as their institutions fail to keep up with changing international demands. </p>
<p>International funding body Wellcome, for instance, <a href="https://wellcome.org/grant-funding/guidance/open-access-guidance/open-access-policy">has stated</a> that all the institutions it funds must publicly commit to responsible and fair research assessment by signing up to the <a href="https://sfdora.org/">San Francisco Declaration on Research Assessment</a>, the <a href="http://www.leidenmanifesto.org/">Leiden Manifesto</a> or an equivalent. Researchers and organisations who do not comply with this policy will be subject to appropriate sanctions. That includes not having new grant applications accepted or their funding being suspended.</p>
<p>African researchers may join international collaborations because they see this as important for their own careers and for accessing the funding needed to unpack important questions within the communities in which they work. Funders and research team leaders from wealthier countries must ensure that the research systems needed to support, realise and adequately acknowledge those from less resourced places are in place. If they are not, capacity development must be funded and implemented as needed. </p>
<h2>A balance</h2>
<p>This issue will be among those tabled at the <a href="https://wcri2022.org/">7th World Conference of Research Integrity</a> in Cape Town, South Africa from 29 May to 1 June. Its theme, Fostering Research Integrity in an Unequal World, offers an ideal opportunity to discuss how best to balance the necessity of changing research assessment practices with the risk to poorer institutions and less resourced researchers. A special symposium will be dedicated to the implementation of the Hong Kong Principles in an African context.</p>
<p class="fine-print"><em><span>Lyn Horn receives funding from US Office of Research Integrity, the South African Department of Science and Innovation and the South African National Research Foundation. This funding is for the 7th World Conference on Research Integrity.
She is currently on the international advisory board, as a research ethics advisor, for four different clinical trials in the field of HIV and TB research. </span></em></p><p class="fine-print"><em><span>Lex Bouter is the chair of the World Conferences on Research Integrity Foundation and one of the cochairs of the 5th, 6th and 7th WCRI . He is also one of the coauthors of the Hong Kong Principles.</span></em></p>The movement to change the way researchers are measured should undoubtedly be embraced.Lyn Horn, Director, Office of Research Integrity, University of Cape TownLex Bouter, Professor of Methodology and Integrity, Vrije Universiteit AmsterdamLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1416842020-07-08T18:52:50Z2020-07-08T18:52:50ZWhy the h-index is a bogus measure of academic impact<figure><img src="https://images.theconversation.com/files/345665/original/file-20200705-33922-1qdq6zc.jpg?ixlib=rb-1.1.0&rect=41%2C0%2C4625%2C3078&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A portrait of Albert Einstein on a transformer station in St.Petersburg, Russia.</span> <span class="attribution"><span class="source">(Shutterstock)</span></span></figcaption></figure><p>Earlier this year, <a href="https://www.nytimes.com/2020/05/12/magazine/didier-raoult-hydroxychloroquine.html">French physician and microbiologist Didier Raoult generated a media uproar over his controversial promotion of hydroxychloroquine to treat COVID-19</a>. The researcher has long pointed to his growing list of publications and high number of citations as an indication of his contribution to science, all summarized in his “h-index.” </p>
<p>The controversy over his recent research presents an opportunity to examine the weaknesses of the h-index, a metric that aims to quantify a researcher’s productivity and impact, used by many organizations to evaluate researchers for promotions or research project funding. </p>
<p>Invented in 2005 by the American physicist Jorge Hirsch, the <a href="https://dx.doi.org/10.1073%2Fpnas.0507655102">Hirsch-index</a> or h-index, is an essential reference for many researchers and managers in the academic world. It is particularly promoted and used in the biomedical sciences, a field where the massive number of publications makes any serious qualitative assessment of researchers’ work almost impossible. This alleged indicator of quality has become a mirror in front of which researchers admire themselves or sneer at the pitiful h-index of their colleagues and rivals.</p>
<p>Although experts in bibliometrics — a branch of library and information sciences that uses statistical methods to analyze publications — have quickly pointed out <a href="https://mitpress.mit.edu/books/bibliometrics-and-research-evaluation">the dubious nature of this composite indicator</a>, most researchers do not seem to understand that its properties make it a far-from-valid index to seriously and ethically assess the quality or scientific impact of publications.</p>
<p>Promoters of the h-index commit an elementary error of logic. They assert that because Nobel Prize winners generally have a high h-index, the measure is a valid indicator of the individual quality of researchers. However, while a high h-index can indeed be associated with a Nobel Prize winner, this in no way proves that a low h-index is necessarily associated with a researcher of poor standing. </p>
<p>Indeed, a seemingly low h-index can hide a high scientific impact, at least if one accepts that the usual unit of measure for scientific visibility is reflected in the number of citations received.</p>
<h2>Limits of the h-index</h2>
<p>Defined as the number of articles <em>N</em> by an author that have each received at least <em>N</em> citations, the h-index is limited by the total number of published articles. For instance, if a person has 20 articles that are each cited 100 times, her h-index is 20 — just like a person who also has 20 articles, but each cited only 20 times. But no serious researcher would say that the two are equal because their h-index is the same.</p>
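The definition above is easy to operationalise. Here is a minimal Python sketch (our illustration, not from the article) that computes the h-index from a list of per-paper citation counts and reproduces the two-author example:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break  # counts are sorted, so no later paper can
    return h

# Two authors with the same h-index but very different impact:
print(h_index([100] * 20))  # 20 papers, 100 citations each -> 20
print(h_index([20] * 20))   # 20 papers, 20 citations each  -> 20
```

As the article notes, the index saturates at the number of published articles, which is why these two very different profiles are indistinguishable.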
<p>The most ironic part of the h-index’s history is that its inventor wanted to counter the use of the sheer number of published papers as a measure of a researcher’s impact, so he factored in the number of citations those articles received. </p>
<p>But it turns out that an author’s h-index is strongly correlated (up to about 0.9) with his total number of publications. In other words, it is the number of publications that drives the index more than the number of citations, an indicator which remains the best measure of the visibility of scientific publications.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=454&fit=crop&dpr=1 600w, https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=454&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=454&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=571&fit=crop&dpr=1 754w, https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=571&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/345699/original/file-20200706-29-ao7mpj.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=571&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Raoult Didier made front-page news in France for promoting hydroxychloroquine as a remedy for COVID-19.</span>
<span class="attribution"><span class="source">(Shutterstock)</span></span>
</figcaption>
</figure>
<p>All of this is well known to experts in bibliometrics, but perhaps less to researchers, managers and journalists who allow themselves to be impressed by scientists parading their h-index. </p>
<h2>Raoult vs. Einstein</h2>
<p>In a recent investigation into Raoult’s research activities by the French newspaper <em>Médiapart</em>, a researcher who had been a member of the evaluation committee of Raoult’s laboratory said: “<a href="https://www.mediapart.fr/journal/france/070420/chloroquine-pourquoi-le-passe-de-didier-raoult-joue-contre-lui">What struck her was Didier Raoult’s obsession with his publications. A few minutes before the evaluation of his unit began, the first thing he showed her on his computer was his h-index</a>.” Raoult had also said in <em>Le Point</em> magazine in 2015 that “<a href="https://www.lepoint.fr/invites-du-point/didier_raoult/raoult-evaluer-la-recherche-mesurer-ou-tricher-04-10-2015-1970477_445.php">it was necessary to count the number and impact of researchers’ publications to assess the quality of their work</a>.” </p>
<p>So let’s take a look at Raoult’s h-index and see how it compares to, say, that of a researcher who is considered the greatest scientist of the last century: Albert Einstein.</p>
<p>In the <a href="https://www.webofknowledge.com/">Web of Science database</a>, Raoult has 2,053 articles published between 1979 and 2018, having received a total of 72,847 citations. His h-index calculated from these two numbers is 120. We know, however, that the value of this index can be artificially inflated through author self-citations — when an author cites his own previous papers. The database indicates that among the total citations attributed to the articles co-authored by Raoult, 18,145 come from articles of which he is a co-author. These self-citations amount to a total of 25 per cent. Subtracting these, Raoult’s h-index drops 13 per cent to a value of 104.</p>
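The self-citation adjustment described above can be sketched in code. The per-paper figures below are hypothetical (the database reports only the author-level aggregates quoted in the article), but they illustrate how stripping self-citations lowers the index:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(counts, 1) if c >= rank), default=0)

# Hypothetical (total, self) citation counts per paper -- purely illustrative,
# since Web of Science reports only aggregate totals per author.
papers = [(30, 8), (25, 6), (20, 5), (15, 4), (12, 3),
          (10, 3), (9, 2), (8, 2), (7, 2), (6, 1)]

h_raw = h_index([total for total, _ in papers])      # self-citations included
h_adj = h_index([total - s for total, s in papers])  # self-citations removed
print(h_raw, h_adj)  # here: 8 versus 7
```

Note that which papers lose citations matters: the same aggregate share of self-citations can leave the index untouched or lower it by several points.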
<p>Now, let’s examine the case of Einstein, who has 147 articles listed in the Web of Science database between 1901 and 1955, the year of his death. For his 147 articles, Einstein has received 1,564 citations during his lifetime. Of this total number of citations, only 27, or a meagre 1.7 per cent, are self-citations. Now, if we add the citations made to his articles after his death, Einstein has received a total of 28,404 citations between 1901 and 2019, which earns him an h-index of 56.</p>
<p>If we have to rely on the so-called “objective” measurement provided by the h-index, we are then forced to conclude that the work of Raoult has twice the scientific impact of that of Einstein, the father of the photon, special and general relativity, Bose-Einstein condensation and stimulated emission, the phenomenon at the origin of lasers. </p>
<p>Or maybe it is simpler (and better) to conclude, as already suggested, that this indicator is bogus?</p>
<p>One should note the significant difference in the number of total citations received by each of these researchers during their careers. They have obviously been active at very different times, and the size of scientific communities, and therefore the number of potential citing authors, have grown considerably over the past half-century. </p>
<p>Disciplinary differences and collaboration patterns must also be taken into account. For example, theoretical physics has far fewer contributors than microbiology, and the number of co-authors per article is smaller, which affects the measure of the productivity and impact of researchers.</p>
<p>Finally, it is important to note that the statement: “The h-index of person P is X,” has no meaning, because the value of the index depends on the content of the database used for its calculation. One should rather say: “The h-index of person P is X, in database Z.” Hence, according to the Web of Science database, which only contains journals considered to be serious and fairly visible in the scientific field, the h-index of Raoult is 120. On the other hand, in the free and therefore easily accessible database of Google Scholar, his h-index — the one most often repeated in the media — goes up to 179.</p>
<h2>Number fetishism</h2>
<p>Many scientific communities worship the h-index, and this fetishism can have harmful consequences for scientific research. France, for instance, uses the <a href="https://www.sigaps.fr/">Système d’interrogation, de gestion et d’analyse des publications scientifiques</a> (SIGAPS, a system for querying, managing and analysing scientific publications) to grant research funds to its biomedical science laboratories. It is based on the number of articles they publish in so-called high impact factor journals. As reported by the newspaper <em>Le Parisien</em>, the frantic pace of Raoult’s publications <a href="https://www.leparisien.fr/societe/didier-raoult-une-frenesie-de-publications-et-des-pratiques-en-question-12-06-2020-8334405.php">allows his home institution to earn between 3,600 and 14,400 euros annually for each article published by his team</a>.</p>
<p>Common sense should teach us to be wary of simplistic and one-dimensional indicators. Slowing the maddening pace of scientific publications would certainly lead researchers to lose interest in the h-index. More importantly, abandoning it would contribute to producing scientific papers that will be fewer in number, but certainly more robust.</p>
<p><em>This is a corrected version of a story originally published on July 8, 2020. The earlier story said John Hirsch had invented the h-index instead of Jorge Hirsch.</em></p>
<p class="fine-print"><em><span>Yves Gingras has received funding from CRSH and FQRSC. </span></em></p><p class="fine-print"><em><span>Mahdi Khelfaoui has received funding from CRSH.</span></em></p>The h-index has become an indicator of quality for many researchers and may influence the allocation of research funds. But some question its value. Yves Gingras, Professor of History and Sociology of Science, Université du Québec à Montréal (UQAM); Mahdi Khelfaoui, Associate Professor, Université du Québec à Montréal (UQAM). Licensed as Creative Commons – attribution, no derivatives.Beyond the black hole of global university rankings: rediscovering the true value of knowledge and ideas<figure><img src="https://images.theconversation.com/files/342364/original/file-20200617-94044-lgpumc.jpg?ixlib=rb-1.1.0&rect=17%2C0%2C3789%2C2590&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>The recent release of global university <a href="https://www.topuniversities.com/">rankings</a> and the way these are <a href="https://www.stuff.co.nz/waikato-times/news/121785845/the-university-of-waikato-falls-significantly-in-university-global-rankings">reported</a> raises important questions about the role and reputation of our tertiary institutions.</p>
<p>Are universities measured and ranked according to what we really value? Or are they ranked and valued only by what is measured? And are those measures authentic and trusted indicators of quality? </p>
<p>There was a time when no one feared that a university might slip a quality ranking or two in the eyes of the world, the taxpayer, benefactors or students considering domestic or international study. Nowadays, however, universities see no limit to the black hole of global rankings. Its gravitational pull consumes their attention. </p>
<p>While a modern phenomenon, rankings have historical origins. The birth of the modern research-intensive university <a href="https://books.google.co.nz/books?id=sIKVElf-txYC&dq=Academic+Charisma+and+the+Origins+of+the+Research+University+william+clark&source=gbs_navlinks_s">can be traced</a> to Western Europe in 1665 when the first academic journals appeared. In Germany, more than 3,000 journals were published between 1665 and 1790, marking an institutional move from the teaching university to the research university. </p>
<p>Academics were able to share and legitimise their research by publishing in these journals. Students who were called on to write and defend their essays orally could draw on the journals to support their learning.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/university-students-arent-cogs-in-a-market-they-need-more-than-a-narrow-focus-on-skills-140058">University students aren't cogs in a market. They need more than a narrow focus on 'skills'</a>
</strong>
</em>
</p>
<hr>
<h2>There is no one ranking standard</h2>
<p>Today’s journals and the number of citations academics can claim in them are key indicators of a university’s rank and quality. However, when a university has to research and teach in a language other than English, the effect on its ranking can be drastic. </p>
<p>Databases used by the larger university ranking systems, such as Scopus and SCI/SSCI, don’t automatically pick up non-English journals. Opportunities for researchers to gain “ranking points” through peer citations are therefore reduced. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/342362/original/file-20200617-94094-14seo37.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">The University of al-Qarawiyyin in Morocco, the oldest operating institution in the world.</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<p>In the global rankings of university quality, various factors are weighted slightly differently. The <a href="https://www.topuniversities.com/qs-world-university-rankings">QS World University Rankings</a> pay particular attention to reputation among colleagues in the discipline. The <a href="http://www.shanghairanking.com/">Academic Ranking of World Universities</a> (ARWU) considers citations in journals as a proxy for research quality. And the <a href="https://www.timeshighereducation.com/world-university-rankings">Times Higher Education World University Rankings</a> (THE) allocate equally across peer reputation, citation and institutional self-report surveys. </p>
<p>The systems are far from simple and universities increasingly invest in experts to advise on how to improve and maintain ranking scores, especially as more universities crowd the global ranking field.</p>
<p>If we are to accept this imperative to measure and rank universities by academic reputation, publishing record, teaching and research intensity, then we need to ask another question: what other indicators of quality and value might be included? </p>
<p>While online programs have often been considered inferior to “live” learning, for instance, the impact of COVID-19 has forced us to reconsider. There is now broader awareness of the opportunities online teaching opens up – including its positive impact on universities’ carbon footprints.</p>
<p>In fact, the THE rankings tracked progress towards the UN’s <a href="https://sustainabledevelopment.un.org/?menu=1300">Sustainable Development Goals</a> for the first time in 2019. One example of such sustainable activity is Goldsmiths College at the University of London, which <a href="https://www.theguardian.com/environment/2019/aug/12/goldsmiths-bans-beef-from-university-cafes-to-tackle-climate-crisis">banned</a> the sale of beef on campus. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/342363/original/file-20200617-94101-1gcsl6m.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Oxford University: ‘The Lord is my light’</span>
<span class="attribution"><span class="source">Shutterstock</span></span>
</figcaption>
</figure>
<h2>How do you measure intangible value?</h2>
<p>Taking an even broader view, might we consider the spiritual dimension of higher education? The university has long been valued for its divine contribution: Oxford University’s motto has been “<em>Dominus illuminatio mea</em>” (the Lord is my light) for at least 200 years. “O my Lord. Advance me in Knowledge” is the motto of the University of Karachi. </p>
<p>This marriage of the sacred and the scientific has been a theme since the founding of the University of al-Qarawiyyin in 859 CE in Morocco. It’s said to be the oldest continually operating higher educational institution in the world. </p>
<p>In the rush to measure quantifiable indicators of output have we obscured these less tangible forms of value? </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/covid-19-what-australian-universities-can-do-to-recover-from-the-loss-of-international-student-fees-139759">COVID-19: what Australian universities can do to recover from the loss of international student fees</a>
</strong>
</em>
</p>
<hr>
<p>If COVID-19 taught us anything, it was the value of communication and connection (sometimes called <a href="https://www.learning-theories.com/connectivism-siemens-downes.html">connectivism</a>). In fact, experts from universities came to the fore as rarely before. Rather than handing more influence to PR and social media experts, might this be an opportunity to re-create the university as the place for exchanging ideas, teaching and research?</p>
<p>Maybe we should look back to the House of Wisdom (بيت الحكمة), founded in Baghdad in 786 CE, where scholars met daily to translate, discuss and write in many languages: Arabic, Farsi, Hebrew, Aramaic, Syriac, Greek and Latin. Aristotle’s work was famously translated from Greek. So too the work of the physician Hippocrates. </p>
<p>What hadn’t been accessible was made accessible and shared. The “West” benefited from this knowledge from the East, laying the foundations for the Renaissance.</p>
<p>This was a true academy of the arts and sciences, valued not for its citations, number of Nobel Prize winners or the ratio of doctorates to bachelor degrees, but for the exchange of knowledge and ideas. One wonders how this global multilingual forerunner of a quality modern university might fare under our ranking regime.</p>
<p>By reaching back in history we might recover those other measures of quality and value that formed the foundations upon which modern universities are built. The adage that “if everything is to be as before, then all must change” rings true. How we value and rank the exchange of knowledge and ideas will once again become something worth striving for.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The gravitational pull of global rankings consumes university energy and attention. But there has to be a better way to measure their value. Stephen Dobson, Professor and Dean of Education, Te Herenga Waka — Victoria University of Wellington; Edward Schofield, Reviews Advisor, Te Herenga Waka — Victoria University of Wellington. Licensed as Creative Commons – attribution, no derivatives.We think there’s a better way to assess the research of African academics: here’s how<figure><img src="https://images.theconversation.com/files/336457/original/file-20200520-152327-1ujpunf.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Many African researchers feel they should do research that would be acceptable for publication in Western outlets. </span> <span class="attribution"><span class="source">Oleksandr Rupeta/NurPhoto via Getty Images</span></span></figcaption></figure><p>In the past two decades, much has been made in academic circles about global <a href="https://www.universityworldnews.com/post.php?story=20191017111952212">rankings</a> of educational institutions. Bodies such as <a href="https://www.timeshighereducation.com/world-university-rankings">Times Higher Education</a> and <a href="http://www.webometrics.info/en">Webometrics</a> regularly rank universities based on a set of criteria. These include internationalisation of faculty and students, cited research publications and awards won by scholars. </p>
<p>This ranking phenomenon has increased the pressure on academics and researchers in Africa to present their research output in publishing outlets that are perceived as highly rated. </p>
<p>Career progression – for instance, access to grants, appointments and promotions – is now tied to individual ranking. Student enrolment and funding from government and other bodies to institutions are equally being influenced by institutional ranking. </p>
<p>Since the Western world usually leads in setting the criteria, academic prestige comes from conforming to Western standards in the execution and reportage of research projects. But some African researchers are <a href="https://www.degruyter.com/view/product/547217">now asking questions</a> about the fairness, transparency and reliability of these processes of evaluation and scholarly rankings. They are also concerned about the effect of Western expectations on African societies and their needs.</p>
<p>What matters most in <a href="https://aoasg.files.wordpress.com/2017/04/measuring-what-matters-webinar3.pdf">scholarly evaluation</a> is itself a matter of enquiry. Hence the need to acknowledge and accommodate the inherent limitations of funding, access, collaboration, standardisation and other <a href="https://journals.sagepub.com/doi/abs/10.1177/097172180901500104">constraints</a> faced by developing countries.</p>
<p>The desire of scholars and institutions in Africa to fit into the Western-imposed model despite the deficit of local research support infrastructure may be counterproductive in the quest to achieve sustainable development in Africa. </p>
<p>I belong to a group of African researchers in Nigeria who are concerned about this situation. We reviewed the status quo and conducted a survey to get the perspectives of researchers and education administrators from developing countries. </p>
<p>The <a href="https://www.degruyter.com/view/product/547217">survey results</a> indicate that the majority of African academics are concerned about the status quo. They would support a shift in publishing practices and the assessment of researchers. Such a shift should be supported by institutional administrators and policy makers. </p>
<h2>Consequences</h2>
<p>Western indexing houses track how often research is cited and publish the metrics of most publishing outlets. For this reason, many African researchers feel they should do research that would be acceptable for publication in such outlets. </p>
<p>This can have negative consequences. For example, there’s the issue of access and copyright. A study in Africa might be of national importance. But its publication may not readily be accessible to the researcher’s contemporaries or government since the copyright might rest with a commercial Western publishing outlet. </p>
<p>This impairs the development of rigorous science and limits the exploration and expansion of indigenous knowledge for regional advancement. </p>
<p>There are other consequences to focusing on meeting Western requirements for academic research. It undermines African potential to use the continent’s resources to tackle its own challenges. And it encourages “brain drain” – when experts move from Africa to the developed world. </p>
<p>Those who make the rules control the market. This is also true in publishing and academia. The bodies that oversee acceptable publication outlets, universal patents, registration of internet domain names and hosting servers are all located in the West. It should come as little surprise that this influences access and rankings to the advantage of Western systems and institutions. </p>
<p>Furthermore, westernisation has largely been conflated with internationalisation or misconstrued for civilisation. The negative <a href="http://www.esthinktank.com/2018/12/29/westernization-in-africa-another-perspective/">impact of this on Africa</a> is well documented.</p>
<h2>What ought to be done</h2>
<p>Our survey offers <a href="https://content.sciendo.com/view/title/568519">suggestions</a> for governments and universities. </p>
<p>First, African governments should monitor and limit schemes that promote intercontinental collaboration and publications at the expense of intra-African and national publications. </p>
<p>Secondly, grant-giving foreign governments and agencies ought not to dictate what and how to research. Each nation must set its developmental priorities and align scientific research with them. </p>
<p>Thirdly, universities, grant-awarding bodies and educational ranking agencies need to revise their research evaluation methods. We came up with some new, relatively simple, but broadly useful metrics to assess research. For example: </p>
<p><strong>Total citation impact:</strong> a measure of how many times a research paper has been cited per year of existence. Rather than just the raw number of citations, as presently used, our model states the citation rate over time. Stating that an article is cited three times per year on average is more informative than noting that it has been cited six times since its publication. </p>
<p><strong>Weighted author impact:</strong> a way of rating researchers, virtually independent of their respective disciplines. It evaluates the article’s impact rather than comparing the journal’s impact with other journals in its discipline. </p>
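The survey describes these metrics only verbally. Assuming "total citation impact" is simply citations divided by years since publication, a minimal sketch:

```python
def total_citation_impact(citations, year_published, current_year):
    """Citations per year of existence: a rate, not a raw count."""
    years = max(current_year - year_published, 1)  # avoid dividing by zero
    return citations / years

# An article cited 6 times since 2018 scores 3.0 citations/year in 2020,
# which is more informative than the bare count of 6.
print(total_citation_impact(6, 2018, 2020))  # 3.0
```

The function name and the one-year floor for very recent papers are assumptions for illustration, not details given in the proposal.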
<p>We have also called for the establishment of an African indexing house. This would track publications and citation rates of scholarly works produced in Africa. The resultant confidence, fair play and opportunities for African and other researchers could stimulate greater productivity and national development.</p>
<p class="fine-print"><em><span>Olumuyiwa Sunday Asaolu does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>The desire of scholars and universities in Africa to fit into a model imposed from elsewhere may hinder development in Africa. Olumuyiwa Sunday Asaolu, Associate Professor of Systems Engineering, University of Lagos. Licensed as Creative Commons – attribution, no derivatives.Here’s how competition makes peer review more unfair<figure><img src="https://images.theconversation.com/files/133404/original/image-20160808-18053-n0yxqr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What are the implications of peer review on competition in science?</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/wheatfields/1823836311/in/photolist-3MaCZD-pFtAKK-dbWvqC-56Jkta-ohrv6Y-cNpiqN-7gsUPm-56JkqV-4z9Kkf-2vVkpu-6ss3X5-a6YxaA-ph2K7C-p1DLYB-6Mw73S-o5zkQo-bbBnXr-7Pfj3S-bixL5T-4CX1b8-FLLyVh-dABED9-odLsUp-JDPJKo-6EAhPB-8H64QP-7tvVwq-aFzQmz-7cCUeW-6LFb8z-c8m5oY-8xhfNt-8sc1Xu-bWDHha-DSrJd-hJuMTm-bixKZv-s2a4p1-iYVLzN-nHKB6W-3iVZdR-ba4FpF-nTZgkJ-ekEQgP-2LRHmX-sfZ95s-hxCFnY-qdAhKb-c8m59d-fif4Fn">Christian Guthier</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>A scientist can spend several months, in many cases even years, strenuously investigating a single research question, with the ultimate goal of making a contribution – little or big – to the progress of human knowledge. </p>
<p>Succeeding in this hard task requires specialized, years-long training, intuition, creativity, in-depth knowledge of current and past theories and, most of all – lots of <a href="http://www.sciencemag.org/careers/2012/09/so-you-think-you-have-skills">perseverance</a>.</p>
<p>As a member of the scientific community, I can say that, sometimes, finding an interesting and novel result is just as hard as convincing your colleagues that your work actually is novel and interesting. That is, the work would deserve publication in a scientific journal.</p>
<p>But, prior to publication, any investigation must pass the <a href="https://theconversation.com/the-logic-of-journal-embargoes-why-we-have-to-wait-for-scientific-news-53677">screening</a> of the “peer review.” This is a critical part of the process – only after peer review can a work be considered part of the scientific literature. And only peer-reviewed work will be counted during hiring and evaluation, as a valuable unit of work.</p>
<p>What are the implications of the current publication system – based on peer review – on the progress of science at a time when <a href="http://iai.asm.org/content/83/4/1229.long">competition among scientists is rising?</a> </p>
<h2>The impact factor and metrics of success</h2>
<p>Unlike in math, not every publication counts the same in science. In fact, at least initially, to the eye of a hiring committee the weight of a publication is primarily given by the “impact factor” of the journal in which it <a href="http://www.sciencemag.org/careers/2013/09/beyond-cvs-and-impact-factors-employers-manifesto">appears</a>.</p>
<p>The impact factor is a metric of success that counts the average past “citations” of articles published by a journal in previous years. That is, how many times an article is referenced by other published articles in any other scientific journal. This index is a proxy for the prestige of a journal, and an indicator of the expected future citations of a prospective article in that journal. </p>
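The classic two-year impact factor is a simple ratio: citations received this year to items the journal published over the previous two years, divided by the number of those items. A toy sketch with hypothetical figures:

```python
def impact_factor(citations_this_year, items_prev_two_years):
    """Two-year impact factor: citations received this year to items the
    journal published over the previous two years, divided by the number
    of those citable items."""
    return citations_this_year / items_prev_two_years

# Hypothetical journal: 240 articles in the prior two years drew
# 1,920 citations this year.
print(impact_factor(1920, 240))  # 8.0
```

The numbers are invented for illustration; they do not describe any real journal.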
<p>For example, according to Google Scholar Metrics 2016, the journal with the <a href="https://googlescholar.blogspot.com/2016/07/2016-scholar-metrics-released_14.html">highest impact factor is Nature</a>. For a young scientist, publishing in journals like Nature can represent a career turning point, a shift from spending an indefinite number of extra years in a more or less precarious academic position to getting university tenure. </p>
<p>Given its importance, publishing in top journals is extremely difficult, and rejection rates range from 80 percent to 98 percent. Such high rates imply that sound research can also fail to make it into top journals. Often, valuable studies rejected by top journals end up in <a href="http://www.sciencemag.org/careers/2008/08/if-first-you-dont-succeed-cool-revise-and-submit-again">lower-tier journals</a>.</p>
<h2>Big discoveries also got rejected</h2>
<p>We do not have an estimate of how many potentially groundbreaking discoveries we have missed, but we do have records of a <a href="http://link.springer.com/article/10.1140/epjst/e2011-01403-6">few exemplary wrong rejections</a>. </p>
<p>For example, economist <a href="https://www.econ.berkeley.edu/faculty/803">George A. Akerlof’s</a> seminal paper, <a href="https://www.iei.liu.se/nek/730g83/artiklar/1.328833/AkerlofMarketforLemons.pdf">“The Market for Lemons,”</a> which introduced the concept of “asymmetric information” (how decisions are influenced by one party having more information), was rejected several times before it could be published. Akerlof was later awarded the Nobel Prize for this and other <a href="https://www.aeaweb.org/articles?id=10.1257/jep.8.1.165">later work</a>. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=386&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=386&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=386&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=485&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=485&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133401/original/image-20160808-18010-1c1dqo4.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=485&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Competition can increase innovation. Does it improve fairness in peer review?</span>
<span class="attribution"><a class="source" href="http://www.shutterstock.com/pic-382077415/stock-photo-group-of-young-scientists-studying-new-substances-in-flasks.html?src=UxwuJu-YbpYoaLw1WVYNvA-1-1">Scientists image via www.shutterstock.com</a></span>
</figcaption>
</figure>
<p>That’s not all. Only last year, it was shown that three of the top medical journals <a href="http://www.pnas.org/content/112/2/360.full">rejected 14 out of 14 of the top-cited articles</a> of all time in their discipline.</p>
<p>The question is, how could this happen?</p>
<h2>Problems with peer review</h2>
<p>It might seem surprising to those outside the academic world, but until now there has been little empirical investigation into the institution that approves and rejects all scientific claims.</p>
<p>Some scholars even complain that peer review itself has <a href="http://www.sciencedirect.com/science/article/pii/S0165614700016187">not been scientifically validated</a>. The main reason behind the lack of empirical studies on peer review is the difficulty in accessing data. In fact, peer review data is considered very sensitive, and it is very seldom released for scrutiny, even in an <a href="http://science.sciencemag.org/content/341/6152/1331">anonymous form</a>.</p>
<p>So, what is the problem with peer review? </p>
<p>In the first place, assessing the quality of a scientific work is a hard task, even for trained scientists, and especially for innovative studies. For this reason, reviewers can often be in <a href="http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0003-066X.45.5.591">disagreement</a> about the merits of an article. In such cases, the editor of a high-profile journal usually takes a conservative decision and rejects it. </p>
<p>Furthermore, for a journal editor, finding competent reviewers can be a daunting task. In fact, reviewers are themselves scientists, which means that they tend to be extremely busy with other tasks like teaching, mentoring students and developing their own research. A review for a journal must be done on top of normal academic chores, often meaning that a scientist can dedicate <a href="http://www.nature.com/news/open-access-is-tiring-out-peer-reviewers-1.16403">less time to it than it deserves</a>.</p>
<p>In some cases, journals encourage authors to <a href="https://www.researchgate.net/post/Is_it_a_good_thing_that_journals_ask_you_to_recommend_a_reviewer_for_you_article">suggest</a> reviewers’ names. However, this feature, initially introduced to help the editors, has been unfortunately <a href="http://www.nature.com/news/publishing-the-peer-review-scam-1.16400">misused</a> to create peer review rings, where the suggested reviewers were accomplices of the authors, or even the authors themselves with secret accounts.</p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133402/original/image-20160808-18023-of3y6f.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">There are many problems with the peer review process.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/thomashawk/207153467/in/photolist-jiHvv-arvAm8-h1wCPX-e5BosA-jNZvxX-4Jp8FS-6hzXLM-6hAE8P-7gvTwm-i4WJsM-6hAAXg-51Gxsn-nHWyWM-6hAN2p-m4LXRM-6hAgx6-fzodPQ-6hAZgH-6hB4mB-6hEW5s-o1q8U8-568Rub-6hAAFZ-6hAbve-o17UTi-6hA4Tt-6hF3Hh-6hB2RV-6hEKtd-6hA7VZ-eJi1gb-6hEXxh-6hEndC-6hAg6P-snRpMc-6hAZdX-6hAJkK-6hEo5d-dbuvmP-6hARjt-6hAk32-6hEtZw-7grX6z-AXoxkm-6hEoaL-o3cu2K-6hzWQM-6hAPeD-7gkWHt-6hzYy8">Thomas Hawk</a>, <a class="license" href="http://creativecommons.org/licenses/by-nc/4.0/">CC BY-NC</a></span>
</figcaption>
</figure>
<p>Furthermore, reviewers have no <a href="https://www.timeshighereducation.com/news/should-academics-be-paid-for-peer-review">direct incentive to do a good review</a>. They are not paid, and their names do not appear in the published article. </p>
<h2>Competition in science</h2>
<p>Finally, there is another problem, which has become worse in the last 15-20 years: academic competition for funding, positions, publication space and credit has increased along with the <a href="http://www.nature.com/news/2011/110420/full/472276a.html">growth in the number of researchers</a>. </p>
<p>Science is a winner-take-all enterprise, where whoever makes the decisive discovery first gets all the fame and credit, whereas all the remaining researchers are forgotten. The competition can be fierce and the <a href="http://undsci.berkeley.edu/article/dna_01">stakes high</a>. </p>
<p>In such a competitive environment, an erroneous rejection, or simply a delayed publication, can carry huge costs. That is why some Nobel Prize winners no longer hesitate to publish their results in <a href="http://link.springer.com/article/10.1140/epjst/e2011-01403-6">low-impact journals</a>.</p>
<h2>Studying competition and peer review</h2>
<p>My coauthors and I wanted to know the impact such competition could have on peer review. We decided to <a href="http://www.pnas.org/content/early/2016/07/05/1603723113.full">conduct a behavioral experiment</a>. </p>
<p>We invited 144 participants to the laboratory and asked them to play the “Art Exhibition Game,” a simplified version of the scientific publication system, translated into an artistic context. </p>
<p>Instead of writing scientific articles, participants would draw images via a <a href="http://nodegame.org">special computer interface</a>. And instead of choosing a journal for publication, they would choose one of the available exhibitions for display. </p>
<p>The decision on whether an image was good enough for display was then made under “double-blind peer review,” meaning that reviewers were anonymous to the authors and vice versa. This is the procedure adopted by the majority of academic journals. </p>
<p>Images that received high review scores were to be displayed in the exhibition of choice. They would also generate a monetary reward for the author.</p>
<p>This experiment allowed us to track for the first time the behavior of both reviewers and creators at the same time in a creative task. The study produced novel insights into the coevolution of the two roles and how they reacted to increases in the level of competition, which we manipulated experimentally. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/133403/original/image-20160808-18023-6j2dtw.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">How does peer review work on a creative task? (The image is for illustrative purpose and does not represent the actual experiment.)</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/catalystopensource/23887038194/in/photolist-CoPjf7-pjsGeB-Ekdvx4-EkcBtK-skACs5-dTcgR6-y8Uvc-p7qzMq-aef4qs-aef23y-daz6kD-s1zL9z-81Jimn-ao99C-2cfeNp-rZQMqF-8vSLwq-dtKGpR-e4HMjF-cmoLpJ-fCZUYM-adLKF5-eJ1gPM-y8Udb-63eF7y-wUQ2n-GNfGw-bvL8q9-ACVDWq-ACVGKS-BujNYV-ACV4c4-qR4AHy-GaZRJy-ACV2At-AMAF77-AMB23E-BcrpJo-FoGSYd-Gmi3ss-phbAkK-dJvgNh-cEJnWs-p2rqH-p2rpS-p2rqg-p2rtS-cEJrdJ-p2rpv-p2rsr">Catalyst Open Source</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In one condition, all the images displayed generated a fixed monetary reward. In another condition – the “competitive condition” – the reward for a display would be divided among all the successful authors. </p>
<p>This situation was designed to resemble the surge in competition for tenure tracks, funding and attention that <a href="http://science.sciencemag.org/content/344/6186/809.full">science has been experiencing</a> in the last 15-20 years. </p>
<p>We wanted to investigate three fundamental aspects of competition: 1) Does competition promote or reduce innovation? 2) Does competition reduce or improve the fairness of the reviews? 3) Does competition improve or hamper the ability of reviewers to identify valuable contributions?</p>
<h2>Here is what we found</h2>
<p>Our results showed that competition acted as a double-edged sword on peer review. On one hand, it increased the diversity and innovativeness of the images over time. On the other, it sharpened the conflict of interest between reviewers and creators. </p>
<p>Our experiment was set up in such a way that in each round a reviewer would review three images on a scale from 0 to 10 (self-review was not allowed). So, if the reviewer and the reviewed author chose the same exhibition, they would be in direct competition. </p>
<p>We found that a sizable number of reviewers, aware of this competition, purposely downgraded the review scores of competitors to gain a personal advantage. In turn, this behavior led to a lower level of agreement between reviewers. </p>
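<p>The mechanism at work here can be illustrated with a toy simulation. This sketch is purely illustrative: the scoring model, the share of strategic reviewers and the size of the downgrade are hypothetical parameters, not those of the actual experiment.</p>

```python
import random
import statistics

random.seed(42)

def review_scores(quality, n_reviewers=3, competing=False, penalty=3):
    """Each reviewer scores an image (0-10) around its true quality.
    Under competition, some reviewers strategically downgrade."""
    scores = []
    for _ in range(n_reviewers):
        honest = max(0, min(10, round(quality + random.gauss(0, 1))))
        if competing and random.random() < 0.5:  # half the reviewers compete with the author
            honest = max(0, honest - penalty)    # strategic downgrade
        scores.append(honest)
    return scores

def disagreement(competing, n_images=1000):
    """Average spread (standard deviation) of reviewers' scores per image:
    a simple proxy for inter-reviewer agreement (higher = less agreement)."""
    spreads = []
    for _ in range(n_images):
        quality = random.uniform(3, 9)
        spreads.append(statistics.stdev(review_scores(quality, competing=competing)))
    return statistics.mean(spreads)

print(disagreement(False))  # honest reviewing: scores cluster around true quality
print(disagreement(True))   # strategic downgrading widens the spread between reviewers
```

<p>Even this crude model reproduces the qualitative pattern: when some reviewers downgrade competitors, reviewers of the same image diverge more, lowering agreement.</p>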
<p>Finally, we also asked a sample of 620 external evaluators recruited from <a href="https://www.mturk.com">Amazon Mechanical Turk</a> to rate the images independently. </p>
<p>We found that competition did not improve the average level of creativity of the images. In fact, with competition many more works of good quality got rejected, whereas in the noncompetitive condition more works of lower quality got accepted. </p>
<p>This highlights a trade-off the current publication system faces as well: stricter gatekeeping under competition rejects good work, while laxer gatekeeping admits weaker work.</p>
<h2>What we learned</h2>
<p>The experiment confirmed there is a need to reform the current publication system. </p>
<p>One way to achieve this goal could be to allow scientists to be evaluated in the long term, which in turn would decrease the conflict of interest between authors and reviewers.</p>
<p>This policy could be implemented by granting <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.1756-2171.2011.00140.x/abstract;jsessionid=5B2E8B1BB12AFE06FC287F11EE2D8880.f03t01">long-term funding</a> to scientists, reducing the urge to publish innovative works prematurely and giving them time to strengthen their results before facing peer review. </p>
<p>Another would be to remove the requirement that a study demonstrate its “importance,” as some journals, like <a href="http://journals.plos.org/plosone/s/reviewer-guidelines">PLoS ONE</a>, are already doing. This would give more innovative studies a better chance of passing the screening of peer review.</p>
<p><a href="http://www.peere.org/">Discussing openly</a> the problems of peer review is the first step toward solving them. Having the courage to experiment with alternative solutions is the second.</p>
<p class="fine-print"><em><span>Stefano Balietti does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Peer review is a crucial part of the academic publication system. It is also a critical part of the hiring and evaluation process. What’s the problem with peer review?Stefano Balietti, Postdoctoral Research Fellow, Northeastern UniversityLicensed as Creative Commons – attribution, no derivatives.
How time-poor scientists inadvertently made it seem like the world was overrun with jellyfish<figure><img src="https://images.theconversation.com/files/127994/original/image-20160624-30263-1jknmgz.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A plague, or just an artefact?</span> <span class="attribution"><span class="source">Jacob Gruythuysen</span>, <span class="license">Author provided</span></span></figcaption></figure><p>When is a jellyfish plague not (necessarily) a jellyfish plague? When time-poor scientists selectively cite the literature to make it look like the oceans are flooded with jellies – even when it’s far from clear that they really are.</p>
<p>What does scientists being in a rush have to do with jellyfish populations? Let’s start from the beginning. </p>
<p>The identification of patterns and trends in nature happens through the accumulation of consistent observations, published in scientific reports. Once observed, the emerging patterns are usually reported in narrative reviews, which often stimulate a flurry of research activity in that field.</p>
<p>Eventually, the purported patterns are formally tested using “meta-analyses” of the published literature, to either confirm the pattern and establish it as theory, or refute it. </p>
<p>This path from the primary observations to theory can be traced through a network of citations.</p>
<p>Science, however, is done by humans and citation practices are subject to <a href="http://www.int-res.com/abstracts/meps/v408/p299-303/">errors of bias and accuracy</a>. Citation practices that are biased in a particular direction have the potential to lead to the identification of false patterns and flawed theory.</p>
<h2>Enter the jellies</h2>
<p>In the 1990s and 2000s, reports began to appear in the scientific literature of increased jellyfish populations in some parts of the world’s oceans. Various <a href="https://faculty.washington.edu/cemills/jellyblooms2001.pdf">reviews</a> reported the possibility that jellyfish blooms might be increasing globally. Over time, these became <a href="http://www.int-res.com/abstracts/meps/v350/p153-174/">increasingly assertive</a> about the existence and extent of the trend, until researchers were asking what to do about the increasingly “<a href="http://www.swansea.ac.uk/bs/turtle/reprints/Richardson%20et%20al%202009%20TREE%20-%20The%20Jellyfish%20Joyride.pdf">gelatinous state</a>” of the oceans worldwide. </p>
<p>The question of whether the global jellyfish boom was real or not was tested by two meta-analyses – which came to opposite conclusions. A <a href="http://www.everythingconnects.org/uploads/7/0/3/5/7035190/art3a10.10072fs10750-012-1039-71.pdf">2012 study</a> concluded that populations were increasing globally because they found evidence for increasing populations in 62% of large marine ecosystems tested (although low certainty was assigned to two-thirds of these). The following year, <a href="http://www.ncbi.nlm.nih.gov/pubmed/23277544">another study</a> found that only 30% of populations were increasing. It concluded that jellyfish populations wax and wane over several decades. </p>
<p>So, in reality, the scientific community is still divided over whether there really has been a sustained global increase in jellyfish numbers.</p>
<h2>What about perception?</h2>
<p>We wanted to know whether the perception of a global increase in jellyfish blooms was at least partly due to poor citation practices in the scientific literature. Our research, <a href="http://onlinelibrary.wiley.com/doi/10.1111/geb.12474/abstract">published in Global Ecology and Biogeography</a>, suggests that it was.</p>
<p>Citation practices can be flawed in several ways:</p>
<ul>
<li><p><strong>Unsupported citations</strong> are when authors cite sources that contain no evidence that could possibly support the author’s claim.</p></li>
<li><p><strong>Selective citations</strong> happen when a paper is cited to support a claim but contrasting evidence provided in the same paper is ignored, or when authors choose to cite earlier papers that have since been refuted.</p></li>
<li><p><strong>Ambiguous citations</strong> happen when an author’s sentence contains multiple phrases, but the citations used to support each phrase are clustered at the end of the sentence, preventing readers from telling which is which.</p></li>
<li><p><strong>Empty citations</strong> are when authors cite a paper that cites another paper as evidence for the claim, rather than the original source (also called “lazy author syndrome”).</p></li>
</ul>
<p>We comprehensively searched the literature for papers, published before the two meta-analyses, that issued statements regarding trends in jellyfish populations. We classified each statement according to its direction and degree of certainty (that is, whether it said jellyfish are “increasing”, “may be increasing”, “decreasing”, or “not sure”), as well as its geographic extent (global, multiple regions, or one region). </p>
<p>We then assessed the papers cited as evidence of the statement, to see whether the citations were accurate or whether they fell into one of the categories of flawed citations outlined above.</p>
<h2>A (jelly)fishy tale?</h2>
<p>Of 159 papers that had issued statements about trends in jellyfish, 61% claimed that populations were increasing (27% at the global scale and 34% in multiple regions) and 25% asserted that populations <em>may</em> be increasing. Only 10% of papers said the data were equivocal. Just one reported that populations were decreasing (but at a local scale).</p>
<p>Most concerning was that only 51% of papers cited were considered to provide unambiguous support for the statements made by the authors. Almost all statements based on unsupportive citations were those claiming that jellyfish were increasing globally (despite the fact that it would have been impossible to make any claims about global trends before the first global meta-analysis was published in 2012). And in all cases, selective citations were biased towards claims that jellyfish populations were increasing.</p>
<p>Pressure to publish in prestigious journals and win research funds may lead some scientists to make claims that reach beyond the evidence available. In most cases, however, citation errors are not overt attempts to distort the evidence. Rather, they probably arise because increasing academic workloads reduce the time available to evaluate papers accurately and to keep abreast of the <a href="http://www.int-res.com/abstracts/meps/v408/p299-303/">almost exponential increase in the volume of literature being published</a>. </p>
<p>As scientists, we need to ensure that our claims are always supported by robust evidence because it is apparent that poor citation practices – and, in particular, selective citation of the literature – can distort perceptions within a research field.</p>
<p class="fine-print"><em><span>Rob Condon receives funding from US National Science Foundation.</span></em></p><p class="fine-print"><em><span>Carlos M. Duarte, Cathy Lucas, Charles Novaes de Santana, Kylie Pitt, and Marina Sanz-Martín do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>How flawed citation practices can perpetuate scientific ideas even before they’ve been fully established as true.Kylie Pitt, Associate Professor, Griffith UniversityCarlos M. Duarte, Adjunct professor, King Abdullah University of Science and TechnologyCathy Lucas, Associate Professor, Marine Biology & Ecology Research Group (MBERG), University of SouthamptonCharles Novaes de Santana, Postdoctoral research associate, University of ZurichMarina Sanz-Martín, Researcher, Department of Global Change Research, Universitat de BarcelonaRob Condon, Assistant Professor in Biological Oceanography, University of North Carolina WilmingtonLicensed as Creative Commons – attribution, no derivatives.
Should academics cite those who have breached moral and humane borders?<figure><img src="https://images.theconversation.com/files/127438/original/image-20160621-8856-cnqvv5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Does citing a scholar run the risk of being perceived as validating not only the research, but the researcher? 
</span> <span class="attribution"><span class="source">Michael Brace/flickr</span>, <a class="license" href="http://creativecommons.org/licenses/by-nc-nd/4.0/">CC BY-NC-ND</a></span></figcaption></figure><p>In the United States and, indeed, worldwide, academics have been shocked by the arrest of a University of Cincinnati <a href="http://local12.com/news/local/uc-professor-arrested-on-child-porn-charges">professor of classics, Holt N Parker, on child pornography charges</a>. Parker was arrested on charges of distribution and receipt of child pornography in mid-March and promptly suspended from his academic position.</p>
<p>Shocking as it may be, such an incident is nothing new. Scholars have long encountered skeletons in the academic closets of peers and intellectual heroes. Personal misdemeanors or crimes range from longstanding mistreatment of family and friends to offensive political beliefs and obscene acts.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=950&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=950&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=950&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1194&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1194&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127468/original/image-20160621-16045-202gmy.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1194&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Louis Althusser.</span>
<span class="attribution"><span class="source">via Wikimedia Commons</span></span>
</figcaption>
</figure>
<p>French theorist <a href="http://plato.stanford.edu/entries/althusser/">Louis Althusser</a> (1918-1990) is still revered in some academic quarters as an important and influential <a href="https://ceasefiremagazine.co.uk/in-theory-althusser-1/">Marxist theorist</a>. His work on <a href="https://faculty.washington.edu/mlg/courses/definitions/Interpellation.html">interpellation</a>, the cultural process whereby ideas become embedded and structure one’s life, continues to influence academic disciplines from sociology and anthropology to film studies and feminist theory. </p>
<p>In an entry on Althusser in the <a href="http://plato.stanford.edu/entries/althusser/#Mar">Stanford Encyclopedia of Philosophy</a>, the decline of interest in his reading of Marx is attributed in part to the “ill-fated facts of his life”.</p>
<p>This is a somewhat casual allusion to Althusser’s murder of his wife, Hélène, in 1980. A lingering description of the act opens his <a href="http://www.independent.co.uk/voices/getting-away-with-murder-its-the-talk-of-paris-how-louis-althusser-killed-his-wife-how-he-was-an-1530755.html">autobiography</a>:</p>
<blockquote>
<p>I place my two thumbs on the hollow of flesh round the top of the breastbone and, applying pressure, one thumb to the right, the other aslant to the left, I slowly reach the harder zone beneath the ears. I massage in a V. I feel a great muscular fatigue in my forearms; they ache whenever I give a massage.</p>
<p>Helene’s features are serene and motionless, her open eyes gazing up at the ceiling.</p>
</blockquote>
<p>Althusser never stood trial. Instead, he was admitted to Sainte-Anne psychiatric hospital and, later, Soisy-sur-Seine. After three years, he was released, but he continued to be regularly readmitted to institutional care until his death. </p>
<p>Can scholars separate the murderer from the philosopher? According to <a href="https://carleton.ca/philosophy/people/geraldine-finn-2/">Geraldine Finn</a> in <a href="https://www.goodreads.com/book/show/10901348-why-althusser-killed-his-wife">Why Althusser Killed His Wife: Essays on Discourse and Violence</a> (1995), the answer is “no”:</p>
<blockquote>
<p>His philosophical and intellectual practice cannot be separated from his personal and emotional practice: they are rooted in the same soil and have the same material, social, historical and ideological conditions of possibility …</p>
</blockquote>
<p>Finn attributes the act not to reports that Althusser was suffering from a psychotic episode, but to the conditions of a society that enables male scholars and their work – at the expense of women. </p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=738&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=738&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=738&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=928&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=928&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127447/original/image-20160621-8861-4jogtn.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=928&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Paul de Man.</span>
<span class="attribution"><span class="source">Goodreads</span></span>
</figcaption>
</figure>
<p><a href="http://www.britannica.com/biography/Paul-de-Man">Paul de Man</a> (1919-1983) did not strangle his wife, but he too poses ethical conundrums for scholars.</p>
<p>A Belgian literary theorist and leading figure of <a href="http://www.britannica.com/topic/deconstruction">deconstructionism</a>, de Man’s intellectual legacy began to seriously crumble in 1987, some three years after his death. </p>
<p>The cause was the discovery of several articles <a href="http://www.lrb.co.uk/v11/n06/frank-kermode/paul-de-mans-abyss">published in the Belgian pro-Nazi newspaper, Le Soir</a> during the War. One in particular was unambiguously anti-Semitic. In addition to his writing, de Man mingled socially with the Nazis in Belgium, and maintained allegiance to the occupation regime after relocating to France in 1941.</p>
<p>Interestingly, de Man’s anti-Semitic essay has sometimes been linked to his work on deconstructionism. Namely, to think and to write is to theorise. Accordingly, what one writes does not represent a definitive meaning. Nor does it represent the definitive beliefs of its author. While this is a slippery interpretation of de Man’s anti-Semitic writing, it demonstrates the link between the public work of the scholar and the private life of the scholarly individual. </p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/a1dx7rM92uQ?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
</figure>
<p>Similarly, in the case of Parker, media reports have attempted to forge an intimate connection between his scholarship and the acts leading to his arrest. As Parker built an impressive academic career on the study of ancient sexualities – a confronting subject to many members of the general public – the scholarship, the scholar and the private man have become intertwined. </p>
<p>The examples of academics such as de Man, Althusser and Parker (although the latter has not yet been convicted of any crime) provoke a series of ethical questions. Should the research of such intellectuals be assigned to the academic junk pile? Or should scholars continue to cite their work?</p>
<p>If scholarship is regarded as an intimate part of a scholar, as inseparable as an arm or a leg, then the answer is probably “no.” Apropos: the scholarship is seen as tainted or inherently corrupt, as per Finn’s stance on Althusser. </p>
<p>Alternatively, a “no” may come from a more general moral unease. Apropos: the act is suitably vile that a protest is in order. Either way, the research is assigned its own sentence: solitary confinement in the form of censorship. But a “no” may be to the detriment of new work and therefore to scholarship.</p>
<p>A “yes” keeps the work in academia. But does the citing scholar run the risk of being perceived as validating not only the research, but the researcher? </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=814&fit=crop&dpr=1 600w, https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=814&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=814&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1023&fit=crop&dpr=1 754w, https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1023&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/127443/original/image-20160621-8856-u7daci.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1023&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Knut Hamsun pictured in 1890.</span>
<span class="attribution"><span class="source">Wikimedia Commons</span></span>
</figcaption>
</figure>
<p>Would citation endorse inhumanity, cruelty, racism and other corruptions? Would censuring and censorious scholars be promoting humanity, kindness and racial harmony by shunning authors such as <a href="http://www.nobelprize.org/nobel_prizes/literature/laureates/1920/hamsun-bio.html">Nobel laureate Knut Hamsun</a>, for example? </p>
<p>Awarded the Nobel Prize for Literature in 1920 for his philosophical and exquisitely composed novels, Hamsun also wrote <a href="https://en.wikipedia.org/wiki/Knut_Hamsun%27s_obituary_of_Adolf_Hitler">an obituary for Hitler</a>. Would extending censorship to artists such as Hamsun, the subject of academic endeavours, open up the floodgates of political correctness and lead to the definitive editing of geniuses such as <a href="http://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/">Ovid</a> and <a href="https://theconversation.com/should-shakespeare-be-censored-for-sensitive-times-38336">Shakespeare</a>?</p>
<p>But what if a scholar simply cannot bring themselves to cite? What if the actions or beliefs of an artist or intellectual are so repellent, so abject, that they incite a response that is not cerebral but something deeper, something innately emotional?</p>
<p>What if Althusser’s strangulation of his wife and his description of it are so shocking that some scholars simply cannot cite his work? For such scholars, the author is not dead, and will never be dead (pace <a href="https://www.theguardian.com/books/booksblog/2010/jan/13/death-of-the-author">Roland Barthes</a>). Instead, the author is a living, breathing monster and always will be. And, as with all monsters and their (written) progeny, they should be locked away forever.</p>
<p class="fine-print"><em><span>Marguerite Johnson does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Scholars have long encountered skeletons in the academic closets of peers and intellectual heroes. But is there a point where a scholar’s behaviour is so taboo that their research should be consigned to the academic junk pile?Marguerite Johnson, Associate Professor of Ancient History and Classical Languages, University of NewcastleLicensed as Creative Commons – attribution, no derivatives.
Are citation rates the best way to assess the impact of research?<p>The United Kingdom’s Medical Research Council (MRC) - the equivalent of Australia’s National Health and Medical Research Council - recently released its 2014-15 <a href="http://www.mrc.ac.uk/publications/browse/mrc-economic-impact-report-2014-15/">economic impact report</a>, details of which make for interesting reading.</p>
<p>Since 2006, research funded by the MRC resulted in more than 94,000 publications, 63,000 of which (67%) were peer-reviewed. </p>
<p>The traditional starting point for considering the scientific impact of research is its citations: the number of times other research papers and editorials subsequently cite a given paper, either in total or within a given number of years of publication.</p>
<p>Different numbers of researchers are involved in different fields of research. So when a relatively small number of scientists work in a study area, even if they write a spectacularly important paper, it can still receive a small fraction of the citations a comparably important paper receives in an area where more researchers are involved.</p>
<p>It would be unreasonable and misleading to conclude that a leading researcher in a small field had less scientific impact than one in a big field. Also, as can be seen from <a href="https://www.timeshighereducation.com/news/citation-averages-2000-2010-by-fields-and-years/415643.article">this</a> table, a paper that has been published for a short time has naturally had less time to be cited than one that has been out for longer.</p>
<h2>Normalised citation impact</h2>
<p>For these reasons, citation analysts use the normalised citation impact (NCI) to adjust citation volumes in different fields. This allows them to be compared in analyses of entire research funding schemes, national research and international activity.</p>
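<p>In spirit, the NCI divides each paper’s citation count by the world-average citation count for its field and publication year, so a score of 1.0 means “exactly at the world average”. A minimal sketch of the idea follows; the function names and field averages are invented for illustration, not taken from the MRC report.</p>

```python
# Illustrative world-average citations per paper, keyed by (field, year).
# These figures are invented for the example, not real bibliometric data.
WORLD_AVERAGES = {
    ("immunology", 2010): 20.5,
    ("mathematics", 2010): 3.2,
}

def nci(citations, field, year, world_averages=WORLD_AVERAGES):
    """A paper's citations relative to the world average for its field/year."""
    return citations / world_averages[(field, year)]

def portfolio_nci(papers, world_averages=WORLD_AVERAGES):
    """Mean NCI across a portfolio of (citations, field, year) papers."""
    scores = [nci(c, f, y, world_averages) for c, f, y in papers]
    return sum(scores) / len(scores)

# A maths paper with 8 citations scores higher (2.5) than an immunology
# paper with 41 (2.0), because maths papers attract far fewer citations.
papers = [(41, "immunology", 2010), (8, "mathematics", 2010)]
print(portfolio_nci(papers))  # 2.25
```

<p>On this scale, a portfolio average like the MRC’s reported 2.08 means its papers were cited at roughly twice the world rate for their fields and years.</p>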
<p>The MRC report provides a <a href="https://twitter.com/SimonChapman6/status/695362934956912641">graph</a> showing the NCIs for all MRC-funded research publications for the sample period 2006-13. The average NCI for all papers in health and medical fields across these eight years was a seemingly modest 2.08 – yet this was more than twice the world average. </p>
<p>The report notes that of more than six million papers, more than a fifth (21%) had not been cited at all, while only 3% of the MRC-funded research had no citations to date.</p>
<p>What counts as a high or very high citation count in the MRC data is also interesting: “highly cited” means a paper with just four or more citations, and “very highly cited” anything with eight or more.</p>
<p>The situation in Australia is more opaque. A <a href="https://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/nh164_measuring_up_2013_140218.pdf">2013 report</a> provides plenty of comparative data showing that Australian health and medical researchers punch well above their weight compared with other nations, given our small population. </p>
<p>But nowhere could I find data comparable to that provided by the MRC report. However, the NHMRC report notes:</p>
<blockquote>
<p>The citation distribution among publications is very skewed. While very few publications achieve high citation counts, a vast majority receive very few or no citations at all.</p>
</blockquote>
<p>Citation is of course only a measure of interest in your work by other researchers. </p>
<p>Journals that provide all or some of their papers as open access often have daily updated counters showing the number of visitors to a publication. Not all of these visitors will have read the paper – some will have landed on it while searching for something else – so the data exaggerate the number of actual readers.</p>
<p>My most read <a href="http://injuryprevention.bmj.com/content/12/6/365.full.pdf+html">paper</a> is one looking at the incidence of gun massacres and deaths ten years after the 1996 Australian gun law reforms. </p>
<p>It has received some 136,946 views since 2006, of which 82,312 came in December 2012, in the aftermath of the Sandy Hook school shootings in the United States, after I tweeted a link to the paper and it went viral. </p>
<p>Ironically, the paper wasn’t funded by any research grant.</p>
<h2>Altmetric</h2>
<p>A metric increasingly being used by researchers since 2013 to demonstrate wider interest in their work is <a href="http://www.altmetric.com/?gclid=CjwKEAiA__C1BRDqyJOQ8_Tq230SJABWBSxnhQv2O0ijZ9kz2OukfHcs4YDLmTtezHL5AYZ8rOo8CBoC1zjw_wcB">Altmetric</a>. </p>
<p>The Altmetric score provides an index of the extent to which a research paper is being circulated and discussed on social media and covered in the news media.</p>
<p>My 2006 firearms paper has a stratospheric Altmetric score of 2,118. The 100 highest Altmetric-scoring papers across all research fields in 2015 are listed <a href="http://www.altmetric.com/top100/2015/#explore">here</a>. Had mine been published in 2015 with the same attention, it would have had the seventh-highest Altmetric score that year.</p>
<p>The MRC report also provides data on a range of social impacts that go well beyond citation metrics, assessing the direct influence of research on outcomes of social and economic consequence. </p>
<p>These include:</p>
<ul>
<li><p>The development of more than 4,400 instances of influence on policy and practice - 416 new in 2014, including 472 citations in clinical guidelines.</p></li>
<li><p>The development of more than 1,000 products and interventions - 126 new in 2014.</p></li>
<li><p>The creation or growth of 88 companies - seven new in 2014.</p></li>
<li><p>Approximately 1,081 patents - 37 filed or granted in 2014, with discoveries related to 246 (23%) of these patents already licensed worldwide.</p></li>
</ul>
<p>In work we <a href="http://health-policy-systems.biomedcentral.com/articles/10.1186/1478-4505-13-3">published</a> in 2015 examining the “impact in society” of intervention research in health and medical research funded by the NHMRC, we found 38% of research projects could demonstrate some level of social or health service impact. </p>
<p>We investigated the characteristics of those projects that demonstrated impact and compared them to those that didn’t.</p>
<p>Our study indicated that sophisticated approaches to intervention development, dissemination actions and translational efforts are actually widespread among experienced researchers, and can achieve policy and practice impacts. </p>
<p>However, it was the links between the intervention results, further dissemination actions by researchers and a variety of contextual factors after the research that ultimately determined whether a study had policy and practice impacts. </p>
<p>Given the complicated interplay between various factors, there (alas) appears to be no simple formula for determining which intervention studies should be funded in order to achieve optimal policy and practice impacts.</p>
<p>The judgement of which research applications should be funded is, and is likely to remain, a very inexact science.</p>
<p class="fine-print"><em><span>Simon Chapman, Emeritus Professor in Public Health, University of Sydney. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>From the art world to fashion to Twitter, we’re all living in bubbles (2014-01-08)</h1>
<figure><img src="https://images.theconversation.com/files/38601/original/ncwm5tqm-1389120472.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">We're all living in a bubble these days.</span> <span class="attribution"><span class="source">The Infatuated</span></span></figcaption></figure>
<p>The term “bubble” is now part of everyday conversation, particularly since the financial crisis of 2008. But bubbles are not just a problem in the world of banking. They can affect the choices you make every single day.</p>
<h2>How financial bubbles work</h2>
<p><a href="http://www.newscientist.com/article/mg19926651.700-why-economic-theory-is-out-of-whack.html">Recent studies</a> suggest that too much liquidity is poisonous rather than beneficial for a financial market. Excess monetary liquidity – stimulated by easy access to credit, large disposable incomes and lax lending standards – combined with expansionary monetary policies, such as banks lowering interest rates or states offering tax breaks, flushes the market with capital. This extra liquidity leaves financial markets vulnerable to volatile asset price inflation, driven by short-term and possibly leveraged speculation by investors.</p>
<p>When these combine to create a bubble, we end up with too much money chasing not enough assets. These assets – both bad and good – become elevated beyond their fundamental value to an unsustainable level. Add <a href="https://theconversation.com/all-those-likes-and-upvotes-are-bad-news-for-democracy-21547">socio-psychological phenomena</a> like boom-thinking, group-thinking, herding and informational cascades, and it’s just a matter of time before the bubble <a href="http://www.thefreelibrary.com/Financial+Market+Bubbles+and+Crashes.-a0237362479">bursts</a>.</p>
<h2>Your daily bubble</h2>
<p>Behind every financial bubble, crash and subsequent crisis <a href="http://press.princeton.edu/titles/9934.html">lurks a political bubble</a>. These political bubbles are powerful enough to influence financial markets, as are bubbles in the property market. But you are touched by bubbles even when you are not buying a house or dabbling in the stock market.</p>
<p><a href="http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html">Filter bubbles</a>, for example, happen when the information we receive online becomes so tailored to our existing areas of interest that we are no longer exposed to views that challenge us. If we only follow like-minded people on Twitter, we start to live in a bubble in which counter-opinions don’t feature. </p>
<p>The fashion industry benefits in particular from bubbles. Getting everybody, or a selective few, to trend the same way is the entire point of its existence, aside from the occasional claims about artistic diligence.</p>
<p>And even the art scene is riddled with bubbles. As Julian Spalding wrote in <a href="http://www.independent.co.uk/voices/commentators/julian-spalding-damien-hirsts-are-the-subprime-of-the-art-world-7586386.html">The Independent</a>, fortunes are made and lost as people buy into a particular artist and the hype around their work grows.</p>
<p>In science, academic papers gather attention when they are cited by others. While individual scientists may express doubt about the validity of using these citations as a measure of quality, they are still very much a part of assessment. This kind of endorsement from colleagues and institutions can be an important factor when grant proposals are submitted or when academics apply for promotion. Here, intellectual liquidity – in terms of the limited funding available for research – combines with lemming behaviour among scientists just like in a <a href="http://link.springer.com/article/10.1007%2Fs13347-013-0142-7">ballooning financial market</a>.</p>
<p>All these bubbles push collectives of agents in the same direction, often with negative results. People in a bubble not only buy the same stock or real estate but also think the same thing, subscribe to the same news, hold the same opinions and appreciate the same art. They “like” the same posts on social media, buy the same brand names and read the same science.</p>
<h2>Informational cascades</h2>
<p>Once you’ve put your opinion out on the marketplace of ideas, be it political, religious or just a personal view, it can gain popularity or prominence as it spreads. That opinion then becomes an asset that can be valued according to the number of people apparently subscribing to it in terms of likes, upvotes, clicks or similar endorsements – even though liking something online requires minimal personal investment.</p>
<p>Public opinion tends to shift depending on factors ranging from the zeitgeist and new facts to current interests and the premium of social imprimatur. Opinion bubbles may accordingly go bust suddenly or deflate gradually – a process recently satirised by BuzzFeed in <a href="http://www.buzzfeed.com/tomphillips/the-29-stages-of-a-twitterstorm">The 29 Stages of a Twitterstorm</a>. Everyday personal opinions serve as intellectual liquidity, chasing assets of political or cultural ideas. As in financial markets, both can end up with exaggerated values.</p>
<p>Across spheres, from science to your wardrobe, bubbles share similar structures and dynamics. The term “bubble” is no longer confined to just financial movements. In the information age, it can refer to irrational, collective, aggregated behaviour, beliefs, opinions or preferences based on social proof in <a href="http://www.springer.com/medicine/book/978-3-319-03831-5">all parts of society</a>.</p>
<p>Over in finance, informational cascades have become a major factor in the <a href="http://www.thefreelibrary.com/Financial+Market+Bubbles+and+Crashes.-a0237362479">generation of bubbles</a>, where, as economist Harold Vogel notes, “individuals choose to ignore or downplay their private information and instead jump the bandwagon by mimicking the actions of individuals acting previously”. If you think about your own actions every day, you might uncover some uncomfortable truths about the bubbles you live in.</p>
<p class="fine-print"><em><span>Vincent F Hendricks does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em><span>Vincent F Hendricks, Professor of Formal Philosophy, University of Copenhagen. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Scientists must share early and share often to boost citations (2013-10-01)</h1>
<figure><img src="https://images.theconversation.com/files/32241/original/khqmsqr3-1380606028.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">A new study shows there are benefits to researchers sharing their data.</span> <span class="attribution"><span class="source">opensourceway</span></span></figcaption></figure>
<p>“<a href="http://en.wikipedia.org/wiki/Publish_or_perish">Publish or perish</a>” is a well-known maxim within academia. It is introduced to researchers early in their careers, often by a PhD supervisor, keen for his or her students to start building a career. </p>
<p>While we researchers tell ourselves that we publish for important and admirable reasons - to contribute to the global field of knowledge, to expand the field - deep down we all know that, at least in part, we are publishing for selfish reasons. </p>
<p>We want to contribute to our CV and expand our chances of getting a job and grant money. To survive in academia, one must “publish early and publish often”.</p>
<p>Unfortunately, this reality has contributed to a culture of secrecy, especially within the sciences. Being “scooped” is a fear that is often talked about and is usually <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0021101">a major argument</a> against sharing the data underlying research papers. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=704&fit=crop&dpr=1 600w, https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=704&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=704&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=884&fit=crop&dpr=1 754w, https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=884&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/32237/original/32jxsvv7-1380603423.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=884&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Niklas Wikström</span></span>
</figcaption>
</figure>
<p>But in these days of “connected science,” “eResearch” and all those other buzzwords and neologisms, can we really afford not to share data?</p>
<p>A <a href="https://peerj.com/articles/175">paper</a> published overnight by American researchers <a href="http://researchremix.wordpress.com/about/">Heather Piwowar</a> and <a href="http://bio.unc.edu/people/faculty/vision/">Todd Vision</a> in the open access journal <a href="https://peerj.com/">PeerJ</a> has finally reliably demonstrated what many data sharing advocates have been saying for a long time. </p>
<p>Far from hurting the ability to publish, sharing data in a public repository can actually lead to a tangible benefit to your publication record through increased citations.</p>
<h2>Share and share alike</h2>
<p>The benefits of data sharing are accepted, at least in theory, by the academic community, but <a href="http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021101">data sharing rates</a> are still low. In a bid to increase rates, research funders have begun to, if not mandate, then “strongly encourage” data sharing practices. </p>
<p>Both the US <a href="http://grants.nih.gov/grants/guide/notice-files/NOT-OD-03-032.html">National Institutes of Health</a> and <a href="http://www.nsf.gov/pubs/policydocs/pappguide/nsf11001/aag_6.jsp#VID4">National Science Foundation</a> now require grant applicants to at least include a data sharing plan in their applications. In Australia, the National Health and Medical Research Council (<a href="https://www.nhmrc.gov.au">NHMRC</a>) is a signatory to a <a href="http://www.wellcome.ac.uk/About-us/Policy/Spotlight-issues/Data-sharing/Public-health-and-epidemiology/WTDV030690.htm">joint statement</a> that says some nice things about data sharing, but only in the field of public health. </p>
<p>However, these policies do not yet mandate full data sharing because, unlike open access publishing, open data sharing is still seen as a potential threat by the very people who generate the data being made open.</p>
<p>Truly free and open data sharing will only occur when the people that generate the data feel comfortable sharing it. And that will only happen when it can be shown that sharing your data doesn’t hurt your ability to publish.</p>
<h2>More citations, please</h2>
<p>Citations of your work in other peer-review journal articles are an important factor in determining your track record. They can be thought of (rather crudely) as the intellectual equivalent of Facebook “likes”. Someone who publishes a lot, but isn’t cited often, is like that friend we all have that posts a lot but has nothing interesting to say. </p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=401&fit=crop&dpr=1 600w, https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=401&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=401&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=504&fit=crop&dpr=1 754w, https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=504&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/32236/original/f2v4vq8h-1380602251.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=504&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Ksayer1</span></span>
</figcaption>
</figure>
<p>Combining the number of papers someone has published with the number of times those papers have been cited (as in <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/">Hirsch’s <em>h</em>-index</a>) can give a very useful indication of the impact and relevance of their work.</p>
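<p>The <em>h</em>-index itself is simple to compute: it is the largest <em>h</em> for which the author has <em>h</em> papers each cited at least <em>h</em> times. A minimal sketch (the citation counts below are invented for illustration):</p>

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that the author has h papers
    with at least h citations each. `citations` is a list of per-paper
    citation counts."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cited in enumerate(counts, start=1):
        if cited >= rank:
            h = rank  # still have `rank` papers with >= `rank` citations
        else:
            break
    return h

# Ten papers, but only three have been cited three or more times.
print(h_index([10, 5, 3, 1, 1, 0, 0, 0, 0, 0]))  # 3
```

<p>Note how the index rewards a sustained body of cited work: a single blockbuster paper cannot raise it above 1 on its own.</p>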
<p>What Piwowar and Vision have shown is that papers that reported on research where the underlying data was made available in a public repository received 9% more citations than similar studies for which the data was not made available. </p>
<p>To arrive at this conclusion they analysed the citation counts of 10,555 papers on gene expression studies that created <a href="http://en.wikipedia.org/wiki/Microarray_databases">microarray data</a>. These types of studies routinely generate large amounts of raw data by measuring the activity of sometimes thousands of different genes in multiple samples. </p>
<p>A quarter of the papers analysed in this experiment described studies that made data discoverable in one of the two most widely-used gene expression microarray repositories: the US National Center for Biotechnology Information’s <a href="http://www.ncbi.nlm.nih.gov/geo/">Gene Expression Omnibus</a> and the European Bioinformatics Institute’s <a href="http://www.ebi.ac.uk/arrayexpress/">ArrayExpress</a>. The remaining 75% merely reported the outcomes of each study’s analysis. The underlying data was not made available for reuse in other studies or to confirm the veracity of the original claims.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=709&fit=crop&dpr=1 600w, https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=709&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=709&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=891&fit=crop&dpr=1 754w, https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=891&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/32240/original/wktjqnbt-1380604181.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=891&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">opensourceway</span></span>
</figcaption>
</figure>
<p>By comparing the citation counts of papers with and without publicly shared data, Piwowar and Vision demonstrated a small but significant increase in the number of times papers with data available were cited.</p>
<p>The “citation benefit” <a href="http://www.prio.no/Publications/Publication/?x=600">has</a> <a href="http://www.nature.com/ng/journal/v41/n2/full/ng.295.html">been</a> <a href="http://arxiv.org/abs/1111.3618">studied</a> <a href="http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0000308">before</a>, but these prior studies have suffered from a number of confounding factors. Citation rates can be affected by a number of things: the journal that published the paper, its impact factor, citation half-life and open access policy, to name a few. </p>
<p>The large size of this new study meant that these factors (43 in total) could be corrected for and the association between data availability and citation rate isolated with more accuracy.</p>
<p>These findings should go some way to convincing academics that data sharing can have direct personal benefits as well as benefits to their field at large. Most interestingly, the benefit of increased third-party citations appears to persist (six years) beyond the window in which most authors publish subsequent papers on the same underlying data (two years).</p>
<p>Sad as it may seem, personal interest may just be the thing that allows academia to transition, as Piwowar and Vision say, to a culture that simply expects data to be part of the published record.</p>
<p class="fine-print"><em><span>Timothy Smith does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>
<p class="fine-print"><em><span>Timothy Smith, Honorary Fellow, Department of Pathology, The University of Melbourne. Licensed as Creative Commons – attribution, no derivatives.</span></em></p>
<h1>Do not resuscitate: the journal impact factor declared dead (2013-05-21)</h1>
<figure><img src="https://images.theconversation.com/files/24170/original/9sz6nvvg-1369098854.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">It's time to let the journal impact factor die.</span> <span class="attribution"><span class="source">Ben McLeod</span></span></figcaption></figure>
<p>Science is a highly competitive business, so measuring the impact of scientific research, meaningfully and objectively, is essential. The journal impact factor (<a href="http://www.sciencegateway.org/impact/">JIF</a>) has emerged over the past few decades as the most used scientific metric for the assessment of research quality. </p>
<p>As a research scientist, Medical Research Institute Director and former Editor-in-Chief of a scientific journal, I have to confess my own all-too-common use of the JIF, and my delight when that particular parameter fell in my favour. </p>
<p>But the truth is the JIF has major flaws. </p>
<p>There are better ways to gauge the impact of a piece of research, the quality of an individual researcher and even the quality of a peer-reviewed journal. The <a href="http://am.ascb.org/dora/">San Francisco Declaration on Research Assessment</a> attempts to formally address the deficiencies of the JIF and proposes the adoption of different practices for assessing the quality of research publications. </p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/24174/original/gddsj8cq-1369100520.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">MervC</span></span>
</figcaption>
</figure>
<p>More on the declaration later - but first, some discussion of the JIF. What is it and what’s the problem with using it? </p>
<h2>Working the JIF</h2>
<p>Impact factors were <a href="http://jama.jamanetwork.com/article.aspx?articleid=202114">first devised</a> by American scientist <a href="http://www.garfield.library.upenn.edu/">Eugene Garfield</a> in 1955. The current JIF system is a measure of how frequently recently published papers from a particular journal are <a href="http://en.wikipedia.org/wiki/Citation">cited</a> (referenced in another body of work). </p>
<p>Hence, it is said to be a measure of the “impact” of the research published in that journal. </p>
<p>Technically, it is the number of times in a given year that papers published in the previous two years were cited, divided by the number of those papers. For example, a journal with an impact factor of 10 in 2012 means the papers published in that journal in 2010 and 2011 were cited an average of 10 times each in 2012. </p>
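<p>The arithmetic behind that worked example can be sketched as follows; the citation and paper counts are invented for illustration, not real journal data.</p>

```python
def impact_factor(citations_this_year, papers_prev_two_years):
    """Two-year JIF: citations received this year by items the journal
    published in the previous two years, divided by the number of
    citable items it published in those two years."""
    return citations_this_year / papers_prev_two_years

# 2,000 citations in 2012 to 200 papers published in 2010-11 gives a
# 2012 JIF of 10, matching the worked example in the text.
print(impact_factor(2000, 200))  # 10.0
```

<p>Because it is a simple mean, one extraordinarily highly cited paper can drag the whole quotient upward, which is exactly the skew discussed below.</p>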
<p>On the face of it, this should be a good measure of the scientific quality of a journal, but even in this regard the JIF has only limited value. </p>
<p>The JIF can be greatly skewed by an extraordinarily highly cited individual paper. It also does not take account of the different sizes of particular scientific disciplines nor the fact that review articles tend to get cited more often than primary research articles. </p>
<p>These and other deficiencies mean that the JIF is not only a blunt metric when it comes to assessing the quality of a journal but it is open to manipulation. Journals may decide to publish on certain topics or certain article types (such as reviews) to maximise their JIF.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=450&fit=crop&dpr=1 600w, https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=450&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=450&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=566&fit=crop&dpr=1 754w, https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=566&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/24167/original/3qzx9q68-1369098473.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=566&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Peter Kaminski</span></span>
</figcaption>
</figure>
<h2>So what are the options?</h2>
<p>While the JIF may have some limited value in assessing journal quality, it is even less reliable as a measure of an individual researcher or an individual piece of research. </p>
<p>In the context of peer-reviewed publications at least, what’s important is how often others cite that individual’s publications, preferably measured in a way that accounts for the varying sizes of different scientific fields. </p>
<p>One day, it may even prove worthwhile to give “bonus marks” to individuals who publish highly cited papers in low-impact journals. Such metrics are being developed and are becoming more commonplace. </p>
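<p>One simple form such a field-aware metric could take is a normalised citation score, in which a paper’s citation count is divided by the world-average citation rate for its field. The sketch below uses entirely hypothetical field averages (real indicators of this kind draw baselines from large citation databases):</p>

```python
# Hypothetical world-average citation rates per field -- real
# normalised indicators use databases of actual field baselines.
field_average = {"cell biology": 25.0, "mathematics": 4.0}

def normalised_score(citations: int, field: str) -> float:
    """Citations relative to the field's world average: a score of
    1.0 means 'cited as often as an average paper in this field'."""
    return citations / field_average[field]

# The same raw count of 12 citations reads very differently by field:
print(normalised_score(12, "cell biology"))  # 0.48 -> below average
print(normalised_score(12, "mathematics"))   # 3.0  -> well above average
```

<p>The point of the normalisation is that a raw citation count only means something once you know what an average paper in the same field attracts.</p>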
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=928&fit=crop&dpr=1 600w, https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=928&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=928&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1166&fit=crop&dpr=1 754w, https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1166&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/24173/original/9h6r28kv-1369100327.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1166&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption"></span>
<span class="attribution"><span class="source">Claire_Sambrook</span></span>
</figcaption>
</figure>
<p>One example is the <a href="http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1283832/"><em>h</em>-index</a>, proposed by physicist <a href="http://physics.ucsd.edu/%7Ejorge/jh.html">Jorge E. Hirsch</a> in 2005, a metric that combines the number of papers an author has published with the number of times those papers have been cited. While these emerging metrics have their own deficiencies (your <em>h</em>-index, for example, generally improves as you get older!), they are part of an important trend.</p>
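<p>Hirsch’s definition is concrete enough to compute directly: an author’s <em>h</em>-index is the largest number <em>h</em> such that they have <em>h</em> papers each cited at least <em>h</em> times. A short Python sketch:</p>

```python
def h_index(citation_counts: list[int]) -> int:
    """Hirsch's h-index: the largest h such that the author has
    h papers each cited at least h times."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    # Walk down the ranked list while each paper still "pays for" its rank.
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the 950-citation outlier problem
                                  #    that skews the JIF barely moves h
```

<p>Because published papers keep accumulating citations, this number can only ever rise over a career – hence the age bias noted above.</p>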
<h2>Looking beyond citations</h2>
<p>The San Francisco Declaration on Research Assessment has been signed by an impressive list of influential scientists and organisations, as well as the editors-in-chief of many major journals, including Science. </p>
<p>To some extent the declaration states what many researchers in this country already know and have begun applying to their judgements of individual researchers and of the quality of scientific studies. But it is probably the clear line in the sand that the scientific world needed on JIFs.</p>
<p>It is the next development that will be the most interesting: a time where we look beyond simple publication metrics to judge more fully the impact of a piece of research. For example:</p>
<ul>
<li>what resources (online or otherwise) were produced as a result of the work?</li>
<li>how many people accessed these resources?</li>
<li>what impact did the work have on policy and practice?</li>
<li>what was the economic, social or environmental benefit of the work?</li>
</ul>
<p>The significance of the San Francisco Declaration on Research Assessment therefore needs to be seen not simply as announcing the death of the JIF, but as a step along the path to a more enlightened way of assessing the impact of research.</p>
<p class="fine-print"><em><span>Brendan Crabb receives funding from the National Health & Medical Research Council.</span></em></p>
<p class="fine-print"><em>Brendan Crabb, President of the Association of Australian Medical Research Institutes and Director and CEO, Burnet Institute. Licensed as Creative Commons – attribution, no derivatives.</em></p>