Research quality – The Conversation

<h1>Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create</h1>
<figure><img src="https://images.theconversation.com/files/577159/original/file-20240221-22-ttfzl.jpg?ixlib=rb-1.1.0&rect=0%2C0%2C2070%2C1449&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The pandemic spurred an increase in COVID-19 research, much of it with methodological holes.</span> <span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/coronavirus-damage-royalty-free-image/1266909460">Andriy Onufriyenko/Moment via Getty Images</a></span></figcaption></figure><p>Early in the COVID-19 pandemic, researchers <a href="https://doi.org/10.1038/d41586-020-03564-y">flooded journals</a> with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.</p>
<p>However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used <a href="https://doi.org/10.1162/qss_a_00257">poor quality methods</a>. <a href="https://doi.org/10.1186/s12874-020-01190-w">Several other</a> <a href="https://doi.org/10.1038/s41467-021-21220-5">reviews of</a> <a href="https://doi.org/10.1371/journal.pone.0241826">studies published</a> in medical journals have also shown that much early COVID-19 research used poor research methods.</p>
<p>Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar <a href="https://doi.org/10.3390/ijerph17051729">used data</a> from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited <a href="https://scholar.google.com/citations?hl=en&vq=med_publichealth&view_op=list_hcore&venue=kEa56xlDDN8J.2023">over 11,000 times</a>.</p>
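<p>To see why running so many tests undermines the results, consider a minimal sketch of the multiple-comparisons problem (an illustration, not a reanalysis of that study; the group sizes, test and threshold below are assumptions chosen only for the example): with roughly 500 tests at the conventional 0.05 significance level, about 25 “significant” findings are expected even when the data contain no real effects at all.</p>
<pre><code># Illustrative simulation only: 500 tests on pure noise at alpha = 0.05.
import numpy as np

rng = np.random.default_rng(0)
n_tests, n_per_group = 500, 560          # group size is an arbitrary assumption

false_positives = 0
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)     # both groups come from the same
    b = rng.normal(size=n_per_group)     # distribution, so there is no true effect
    # Welch-style t statistic, computed by hand to avoid extra dependencies
    t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / n_per_group +
                                        b.var(ddof=1) / n_per_group)
    if abs(t) > 1.96:                    # ~0.05 two-sided cutoff for large samples
        false_positives += 1

print(false_positives, "of", n_tests, "tests are 'significant' by chance alone")
# Expected count: 500 * 0.05 = 25
</code></pre>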
<p>A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not <a href="https://doi.org/10.1089/ees.2016.0223">strongly linked to research quality</a>, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.</p>
<h2>Methodology matters</h2>
<p>I am a <a href="https://scholar.google.com/citations?user=X1o1PaQAAAAJ&hl=en">public health researcher</a> with a long-standing interest in research quality and integrity. This interest lies in a belief that science has helped solve important social and public health problems. Unlike the anti-science movement <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">spreading misinformation</a> about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.</p>
<p>The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information. </p>
<p>For example, researchers have <a href="https://www.sfu.ca/%7Epalys/Campbell&Stanley-1959-Exptl&QuasiExptlDesignsForResearch.pdf">known for decades</a> that for studies evaluating the effectiveness of an intervention, a <a href="https://www.britannica.com/science/control-group">control group</a> is needed to know whether any observed effects can be attributed to the intervention. </p>
<p><a href="https://doi.org/10.1111/dmcn.15719">Systematic reviews</a> pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality, extracted the data and preregistered their protocols. These features are necessary to ensure the review will cover all the available evidence and tell a reader which is worth attending to and which is not.</p>
<p>Certain types of studies, such as one-time surveys of convenience samples that aren’t representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable <a href="https://doi.org/10.1017/S0033291720005127">caused a particular outcome</a>.</p>
<figure>
<iframe width="440" height="260" src="https://www.youtube.com/embed/WUErib-fXV0?wmode=transparent&start=0" frameborder="0" allowfullscreen=""></iframe>
<figcaption><span class="caption">Systematic reviews involve thoroughly identifying and extracting information from existing research.</span></figcaption>
</figure>
<p>All <a href="https://www.equator-network.org/">study designs have standards</a> that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.</p>
<p><a href="https://doi.org/10.1038/s41467-021-21220-5">Studies comparing</a> <a href="https://doi.org/10.1186/s12916-021-01920-x">COVID-19 papers</a> <a href="https://doi.org/10.1371/journal.pone.0241826">with non-COVID-19</a> papers published in the same journals found that COVID-19 papers tended to have lower quality methods and were less likely to adhere to reporting standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against <a href="https://doi.org/10.1136/bmjebm-2020-111584">dredging the data</a> to find “statistically significant” results that could be selectively reported.</p>
<p>Such methodological problems were likely overlooked in the <a href="https://doi.org/10.1038/s41562-020-0911-0">considerably shortened</a> <a href="https://doi.org/10.1162/qss_a_00076">peer-review process</a> for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be <a href="https://doi.org/10.1038/s41467-021-21220-5">13 days, compared with 110 days</a> in 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had a peer-review process of <a href="https://doi.org/10.1162/qss_a_00257">about three weeks</a>.</p>
<h2>Publish-or-perish culture</h2>
<p>These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.</p>
<p>Journals tend to favor <a href="https://doi.org/10.1371/journal.pone.0010068">positive, “novel” findings</a>: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Person with head in hands, elbows planted on stacks of paperwork and books littering a desk, glasses and laptop on the side" src="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577161/original/file-20240221-26-tv7gdq.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Many researchers feel pressure to publish papers in order to advance their careers.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/surrounded-by-work-royalty-free-image/637293916">South_agency/E+ via Getty Images</a></span>
</figcaption>
</figure>
<p>Academics have worked in a <a href="https://doi.org/10.1089/ees.2016.0223">publish-or-perish</a> <a href="https://doi.org/10.1177/1745691612459058">incentive system</a> for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The <a href="https://theconversation.com/misinformation-is-a-common-thread-between-the-covid-19-and-hiv-aids-pandemics-with-deadly-consequences-187968">flood of mixed-quality COVID-19 information</a> afforded them an opportunity to increase their publication counts and boost citation metrics, as journals sought out and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID-19 papers.</p>
<p>Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of <a href="https://doi.org/10.1002/leap.1566">today’s online</a> <a href="https://doi.org/10.1001/jama.2023.3212">mega-journals</a> publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.</p>
<h2>Healthy criticism</h2>
<p>Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an <a href="https://doi.org/10.1057/palgrave.jors.2602573">open society</a> capable of solving complex problems such as a worldwide pandemic.</p>
<p>Publishing a large volume of misinformation disguised as science during a pandemic <a href="https://doi.org/10.1073/pnas.1912444117">obscures true and useful knowledge</a>. At worst, this can lead to bad public health practice and policy. </p>
<p>Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves <a href="https://doi.org/10.1371/journal.pmed.1001747">critically examining the quality</a> of a study’s designs, statistical methods, reproducibility and transparency, not the <a href="https://doi.org/10.1016/j.jclinepi.2021.05.018">number of times it has been cited</a> or tweeted about.</p>
<p>Science depends on a <a href="https://doi.org/10.1007/s10654-023-01049-6">slow, thoughtful and meticulous approach</a> to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="Two scientists pipetting liquids under a fume hood, with another scientist in the background examining a sample" src="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=423&fit=crop&dpr=1 600w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=423&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=423&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=532&fit=crop&dpr=1 754w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=532&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/577167/original/file-20240221-22-hmviem.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=532&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Rigorous science requires careful deliberation and attention, not haste.</span>
<span class="attribution"><a class="source" href="https://www.gettyimages.com/detail/photo/female-scientist-drops-liquid-into-test-tube-royalty-free-image/127871289">Assembly/Stone via Getty Images</a></span>
</figcaption>
</figure>
<p>Public health draws heavily upon disciplines that are <a href="https://doi.org/10.1038/526182a">experiencing</a> <a href="https://doi.org/10.1177/1745691612462588">replication</a> <a href="https://doi.org/10.1371/journal.pmed.0020124">crises</a>, such as psychology, biomedical science and biology. It is similar to these disciplines <a href="https://doi.org/10.1146/annurev-statistics-031219-041104">in terms of its</a> incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much of the public health research on COVID-19 shows that the field suffers from the same poor-quality methods.</p>
<p>Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.</p>
<p class="fine-print"><em><span>Dennis M. Gorman does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Pressure to ‘publish or perish’ and get results out as quickly as possible has led to weak study designs and shortened peer-review processes.Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2195962024-02-01T19:03:42Z2024-02-01T19:03:42ZConsulting firms provided low-quality research on crucial water policies. It shows we have a deeper problem<figure><img src="https://images.theconversation.com/files/572682/original/file-20240201-17-j9u2l0.jpg?ixlib=rb-1.1.0&rect=28%2C17%2C3805%2C2138&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/agriculture-irrigation-silhouette-farmer-tablet-walks-2330622729">maxim ibragimov, Shutterstock</a></span></figcaption></figure><p>Management <a href="https://www.ibisworld.com/au/industry/management-consulting/1896/#IndustryStatisticsAndTrends">consulting revenue</a> in Australia has grown from less than A$33 billion in 2010 to more than $47 billion in 2023. The increasing use of consultants, as well as the <a href="https://theconversation.com/beyond-the-pwc-scandal-theres-a-growing-case-for-a-royal-commission-into-australias-ruthless-corporate-greed-214474">PwC scandal</a>, highlights serious issues with vested interests, integrity and <a href="https://www.theguardian.com/commentisfree/2023/may/18/why-does-australia-rely-on-consulting-firms-such-as-pwc-and-not-on-its-own-public-servants">transparency</a>. </p>
<p>Consequently, a <a href="https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Finance_and_Public_Administration/Consultingservices">Senate inquiry</a> is investigating the management and integrity of consulting services. The deadline for the Senate committee’s final report has been extended twice, partly due to the <a href="https://www.themandarin.com.au/236738-the-big-fours-revelations-in-senate-estimates/">various revelations</a>, to March 28. So far, all the big consulting groups in Australia have appeared before the committee. </p>
<p>Our <a href="https://www.sciencedirect.com/science/article/pii/S1462901123003039">recent review</a> of research in the Murray-Darling Basin points to other serious concerns about the use of consulting studies, which are increasingly relied upon for policy-making, especially in water. Of the studies we examined, 65 were on the economic consequences of water recovery. Almost half of these were low-quality studies, mainly from consultancies but also by think tanks and government departments. The low-quality studies were more likely to overestimate negative impacts on the economy and community from buying water back for the environment. </p>
<p>Unfortunately, these poor-quality studies were used to justify changes to water policy. Buying back water rights from “willing sellers” is a <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/1467-8489.12001">cost-effective way</a> to redistribute water entitlements. But buybacks were halted under the former Coalition government. The policy <a href="https://minister.dcceew.gov.au/plibersek/media-releases/getting-straight-work-restore-murray-darling-rivers">will now be restored</a> under Labor in the form of “voluntary water purchases”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/water-buybacks-are-back-on-the-table-in-the-murray-darling-basin-heres-a-refresher-on-how-they-work-200529">Water buybacks are back on the table in the Murray-Darling Basin. Here's a refresher on how they work</a>
</strong>
</em>
</p>
<hr>
<h2>Contested research into water buybacks</h2>
<p>The $13 billion basin plan seeks to improve the health of our nation’s largest river system by returning water from irrigation to the environment. </p>
<p>But such water reallocation has been blamed for <a href="https://www.theguardian.com/australia-news/2023/oct/18/murray-darling-basin-water-buyback-plan-farmers-claim-rural-job-losses">huge job losses, reductions in irrigated production and consequently, economic decline in rural towns</a>. </p>
<p>There are many groups with different interests in the basin. Research results are often contested.</p>
<p>To provide an objective assessment and comparison of the quality of economic studies of water in the basin, we developed and applied a <a href="https://www.sciencedirect.com/science/article/pii/S1462901123003039">new economic quality assessment framework</a>. This was inspired by health research, which has long applied grading systems to ensure robustness in research findings (such as <a href="https://www.nhmrc.gov.au/guidelinesforguidelines/develop/assessing-certainty-evidence">the Grading of Recommendations Assessment, Development and Evaluation</a>). </p>
<p>Our framework enables studies to be classified as low, medium or high quality, to suggest how robust each study’s results may be. </p>
<p>Nearly half (45 per cent) of the 65 water recovery studies in our review were classified as low quality. These low-quality studies were much more likely than higher-quality studies to suggest large negative impacts on economic values from water recovery. They were also more likely to be consulting studies. </p>
<p>The high-quality studies (26 per cent) were peer-reviewed and employed sophisticated modelling and extensive analysis. The impacts of water recovery they estimated ranged from none to small or modest. None of these studies were funded by industry. </p>
<h2>Why is there such a difference in results?</h2>
<p>The method used in each study is a major factor determining research quality. Consultants often rely on simple methods such as “input-output modelling” or “multipliers” to assess economic impact. These models rely on simplistic assumptions about fixed links between sectors of the economy to predict changes in job numbers or production, and they cannot account for all the influences on such changes. </p>
<p>Input-output modelling is <a href="https://www.abs.gov.au/statistics/detailed-methodology-information/concepts-sources-methods/australian-system-national-accounts-concepts-sources-and-methods/2020-21/chapter-22-input-output-tables/using-i-o-tables-analysis">heavily criticised as inappropriate by the Australian Bureau of Statistics</a> and many treasury departments. Given this modelling is used across many areas and subjects within Australia to illustrate “economic impact”, its use and application need greater scrutiny. </p>
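<p>To make concrete what this kind of modelling does, here is a minimal sketch of the standard input-output (Leontief) calculation, x = (I − A)^−1 f. Every number in the two-sector table below is invented purely for illustration and is not drawn from any study in our review; the point is that the “multiplier” scales a demand shock mechanically through fixed coefficients, with no room for substitution, price responses or adaptation.</p>
<pre><code># Toy two-sector input-output model (illustrative figures only).
import numpy as np

# A[i, j] = dollars of sector i's output used per dollar of sector j's output.
# Sectors: 0 = irrigated agriculture, 1 = rest of the economy.
A = np.array([[0.20, 0.05],
              [0.30, 0.25]])

leontief_inverse = np.linalg.inv(np.eye(2) - A)   # (I - A)^-1

final_demand = np.array([100.0, 400.0])           # $m of final demand
total_output = leontief_inverse @ final_demand    # gross output implied by demand

# "Impact" of a $10m fall in final demand for irrigated agriculture:
shock = np.array([-10.0, 0.0])
print(leontief_inverse @ shock)
# The loss propagates in fixed proportions; the model cannot represent farms
# switching to dryland production, reinvesting sale proceeds or prices adjusting.
</code></pre>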
<p>Higher quality studies use methods that allow for dynamic feedback and adaptation. They also account for other factors that influence outcomes such as climate or prices. As a result, higher quality studies in our review do not find anywhere near the same large decrease in jobs or economic impact from reduced water extraction. </p>
<p>For example, some feedbacks that can occur when farmers sell water include that the money is reinvested on the farm, increasing profits, or that the farm switches from irrigated to dryland agriculture, so production continues. Alternatively water recovery may increase community welfare through an improved environment, or better downstream water conditions for other farmers. Simplistic modelling approaches often ignore these other benefits.</p>
<p>Our <a href="https://www.sciencedirect.com/science/article/pii/S1462901123003039?via%3Dihub">review</a> also indicated a relative lack of study in the basin on other downstream and Indigenous benefits and costs, as well as a need to pay closer attention to transition and adjustment issues within some small irrigation-intensive communities. </p>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1732719004492439838"}"></div></p>
<h2>We need quality standards for water research</h2>
<p>Basin communities will increasingly need to adapt and adjust as the climate changes. We need better ways to cope with such transitions, especially in the face of future upheavals from drought and extreme weather events.</p>
<p>Hopefully the recently released funding and other <a href="https://consult.dcceew.gov.au/draft-restoring-our-rivers-framework">support for communities</a> announced in the amended <a href="https://www.dcceew.gov.au/water/policy/restoring-our-rivers-act">water law</a> will help communities adjust to the reallocation of water. To date, such funds have <a href="https://www.pc.gov.au/inquiries/completed/basin-plan/report">not been allocated</a> to areas most in need.</p>
<p>The negative socio-economic impacts predicted by low-quality studies are often used to justify changed water policies. We, along with other <a href="https://www.pc.gov.au/__data/assets/pdf_file/0007/369142/sub104-basin-plan-2023.pdf">water economics professors</a>, are calling for greater quality standards when it comes to government-funded research into the effects of water reallocation. The government is now <a href="https://storage.googleapis.com/files-au-climate/climate-au/p/prj2a8f4464525d140f6d670/public_assets/Draft450Framework.pdf">required</a> to update the impact analysis for the basin plan. It is essential that any assessment of impact is robust and defensible, following strict quality standards.</p>
<p>These quality standards could also be applied widely, across a variety of policies and areas. Although high quality research is difficult and takes time, relying on inadequate research can have serious consequences. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/suicide-rates-increased-after-extreme-drought-in-the-murray-darling-basin-we-have-to-do-better-as-climate-change-intensifies-211107">Suicide rates increased after extreme drought in the Murray-Darling Basin – we have to do better as climate change intensifies</a>
</strong>
</em>
</p>
<hr>
<img src="https://counter.theconversation.com/content/219596/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>An Australian Research Council discovery grant and the Murray-Darling Basin Authority provided funding for this research.</span></em></p><p class="fine-print"><em><span>Alec Zuo receives funding from an Australian Research Council discovery grant and the Murray-Darling Basin Authority provided funding for this research.</span></em></p>A comprehensive review of research into the economic consequences of controversial water buybacks in the Murray-Darling Basin reveals many studies are of poor quality. Better standards are needed.Sarah Ann Wheeler, Professor in Water Economics, University of AdelaideAlec Zuo, Associate Professor, School of Economics and Public Policy, University of AdelaideYing Xu, Research Fellow, School of Economics and Public Policy, University of AdelaideLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/2175412023-11-14T19:07:26Z2023-11-14T19:07:26Z‘You only assess what you care about’: a new report looks at how we assess research in Australia<figure><img src="https://images.theconversation.com/files/559197/original/file-20231113-17-hzq74f.jpg?ixlib=rb-1.1.0&rect=23%2C35%2C7880%2C5214&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">
</span> <span class="attribution"><a class="source" href="https://www.pexels.com/photo/photo-of-female-engineer-looking-through-wires-3862623/">ThisIsEngineering/Pexels</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>Research plays a pivotal role in society. Through research, we gain new understandings, test theories and make discoveries. </p>
<p>It also has a huge economic value. In 2021, the <a href="https://www.csiro.au/en/work-with-us/services/consultancy-strategic-advice-services/CSIRO-futures/Innovation-Business-Growth/Quantifying-Australias-returns-to-innovation">CSIRO found</a> every A$1 of research and development investment in Australia creates an average of $3.50 in economy-wide benefits. </p>
<p>But how do we know if individual research projects being conducted in Australia are good quality? How is research recognised? The key way this happens is through “research assessment”. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/tumult-and-transformation-the-story-of-australian-universities-over-the-past-30-years-215536">Tumult and transformation: the story of Australian universities over the past 30 years</a>
</strong>
</em>
</p>
<hr>
<h2>What is research assessment?</h2>
<p>Research assessment is not a centralised or necessarily formal process. It can involve various processes and measures to evaluate the performance of individual researchers and research institutions. This includes assessing the quality, excellence and impact of various outputs. </p>
<p>Research assessment can be qualitative or quantitative. It can include publications in journals and the number of people who cite the research, gaining grants to do further research, commercialisation, media engagement and impact on decision-making or public policy, prizes and invitations to speak at conferences. </p>
<p>If research assessment is working fairly and effectively, it should achieve several things. This includes: helping to develop researchers’ careers, making sure innovative research does not get avoided in favour of short-term gains and helping funders and the community have confidence research is providing value for money and adding to the public good. </p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/we-solve-problems-in-30-days-through-research-sprints-other-academics-can-do-this-too-204373">We solve problems in 30 days through 'research sprints': other academics can do this too</a>
</strong>
</em>
</p>
<hr>
<h2>Our project</h2>
<p>Our new project aimed to provide a better understanding of how research assessment affects research in Australia. </p>
<p>In a <a href="https://acola.org/wp-content/uploads/2023/11/ACOLA_ResearchAssessment_FINAL.pdf">report released today</a>, we surveyed more than 1,000 Australian researchers and more than 50 research organisations. </p>
<p>This included universities, research institutes, industry bodies, government and not-for-profit organisations. The majority of researchers (74%) were in academic roles. Across those research sectors, we also conducted 11 roundtables involving around 120 people and 25 intensive interviews to understand the issues.</p>
<p>This work was commissioned by Chief Scientist Cathy Foley and conducted by the Australian Council of Learned Academies (involving the academies of science, medical science, engineering and technological sciences, social sciences and humanities). </p>
<p>It also comes as the <a href="https://www.education.gov.au/australian-universities-accord">Universities Accord review</a> examines how research is funded and approached within higher education. </p>
<figure class="align-center ">
<img alt="A young man searches the shelves of a library." src="https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/559200/original/file-20231114-21-9oaq62.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">Research assessment should help to develop researchers’ careers.</span>
<span class="attribution"><a class="source" href="https://www.pexels.com/photo/male-student-searching-at-book-shelves-6549376/">Tima Miroshnichenko/Pexels</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<h2>What we found</h2>
<p>We found some difficulties with the current approach to research assessment. </p>
<p>We heard there is a tendency by some researchers to “play it safe” in terms of doing research they believe will score well. We also heard how the assessment process can unintentionally exclude or devalue particular forms of knowledge, particularly in the humanities and the social sciences, where outputs can be less easily quantified or less immediately seen.</p>
<p>As one interviewee said: </p>
<blockquote>
<p>What is assessed and how it is assessed are an indication of what the
organisation values. You only assess what you care about. Values and
culture drive assessment.</p>
</blockquote>
<p>Our roundtables told us senior staff and supervisors are often seen to reinforce the culture of “publish or perish”, with the number of articles being valued more highly than the quality. </p>
<p>We heard early and mid-career researchers and people from underrepresented backgrounds can have difficulties trying to “play the game” to advance their careers. For example, early-career researchers are often expected to produce work that benefits their larger team, at a cost to their own capacity for promotion. </p>
<p>As one interviewee noted: </p>
<blockquote>
<p>Metrics are essential for defining value and comparative difference, but
Australia requires a modern and fair framework for assessing our current
and next generation of researchers.</p>
</blockquote>
<p><div data-react-class="Tweet" data-react-props="{"tweetId":"1650983340751613952"}"></div></p>
<h2>Survey results</h2>
<p>Our survey found a high level of dissatisfaction with the state of research assessment. This included: </p>
<ul>
<li><p>73% of respondents agreed assessment processes are not consistently or
equitably applied across disciplines, in particular between the humanities and the sciences </p></li>
<li><p>67% said there are not enough opportunities to provide input into research assessment practices</p></li>
<li><p>70% said assessments took up unreasonable time and effort. </p></li>
</ul>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/fieldwork-can-be-challenging-for-female-scientists-here-are-5-ways-to-make-it-better-214215">Fieldwork can be challenging for female scientists. Here are 5 ways to make it better</a>
</strong>
</em>
</p>
<hr>
<h2>The way forward</h2>
<p>In our survey, we asked “What is one specific change you would
recommend to improve current research assessment processes?”.</p>
<p>Respondents wanted to see a shift towards quality over quantity. This means not just a focus on publishing as many papers as possible, but supporting research that may take longer for its value and benefits to emerge. </p>
<p>They wanted interdisciplinary research to be promoted and rewarded, because many of the complex problems of our world – from climate change to domestic violence to housing affordability – require multiple disciplines to be involved in finding solutions. In the same vein, they also wanted collaboration and team work to be rewarded more clearly and transparently. </p>
<p>They wanted less bias towards STEM (science, technology, engineering and maths) research and more promotion of diversity and of early-career researchers. This included better understanding of their personal and cultural situation, more focused career development and better managed teamwork.</p>
<p>To achieve all of this, and more, we will also need to understand that no single measure can assess all research or researchers. So, several tools will be needed, including quantitative indicators as well as qualitative measures and peer review.</p>
<hr>
<p><em>Ana Deletic, Louisa Jorm, Duncan Ivison, Robyn Owens, Jill Blackmore, Adrian Barnett, Kate Thomann, Caroline Hughes, Andrew Peele, Guy Boggs and Raffaella Demichelis were all part of the expert working group supporting this work.</em></p>
<p class="fine-print"><em><span>Kevin McConkey has previously received funding from the Australian Research Council. He is the current chair of the Policy Committee of the Academy of the Social Sciences in Australia. He is the chair of the Expert Working Group of the the Australian Council of Learned Academies, which prepared the report referred to in this article.</span></em></p>The project, spanning researchers across science and the humanities, looks at how ‘research assessment’ affects research in Australia.Kevin McConkey, Emeritus Professor , UNSW SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/1260232019-11-07T01:55:20Z2019-11-07T01:55:20ZAnalysis: Indonesian policymaking is not supported by quality research and academic freedom<p>Indonesian President Joko “Jokowi” Widodo recently picked his cabinet ministers. They will be expected to drive policymaking and implementation in the next five years to tackle the complex problems affecting the nation of more than a quarter-billion people. </p>
<p>Policymaking sounds like a big word, and it is. Government policies determine how governments deliver the programs and services that affect everyday lives.</p>
<p>And Indonesia needs good policies. It’s at a critical juncture. </p>
<p><a href="http://ejournal.lipi.go.id/index.php/jmiipsk/article/view/626">Until 2030, Indonesia will have more people of productive age</a> than children and older people. But, without good policies, the country might miss this window of opportunity. It might turn old before it becomes rich. </p>
<p>To succeed in delivering programs that help eliminate poverty, ensure people are fed nutritious food, have quality education, are resilient to natural disasters and respectful of diversity, among others, the government must base policies on academically sound evidence. </p>
<p>But our study, <a href="http://www.gdn.int/doing-research-assessment">Doing Research Assessment</a>, shows Indonesian policymaking is predominantly informed by research with poor theoretical engagement, with no strong tradition of peer review and with legal threats to academic freedom. </p>
<h2>Connection between research and policymaking</h2>
<p>In the study, we implemented a three-step methodology. First, we did an overall assessment of the economic, political, historical and regional context. Second, we mapped national research actors. Finally, we surveyed 102 respondents: researchers (33.3%), research administrators (39.3%) and policymakers (27.4%). </p>
<p>The respondents represent organisations that produce or use social sciences. They come from government and funding organisations, civil society organisations, higher education institutions, and private think tanks. </p>
<iframe title="" aria-label="Interactive donut chart" id="datawrapper-chart-COUPI" src="https://datawrapper.dwcdn.net/COUPI/2/" scrolling="no" frameborder="0" style="width: 0; min-width: 100% !important; border: none;" height="650" width="100%"></iframe>
<p>Our study shows that there is a good connection between people and institutions in the social research sector and policymakers. </p>
<p>A majority of researchers (66.7%) have received government requests for expert advice on the social aspects of policy development. Significantly, a majority of research organisations (68.3%) have worked on research commissioned directly by the government. And 93.5% of researchers have been a member of a policy advisory board at a central level over the last three years. </p>
<p>The majority of policymakers (92.9%) also claim that they benefit from research products such as scientific papers, working papers, presentation slides and position papers. </p>
<p>But this connection between the social research sector and policymakers is not accompanied by high-quality and academically rigorous research through peer review and academic collaboration.</p>
<p>Some 76.5% of researchers received less than two weeks of capacity building, such as research-related and publication training, in the past three years. Some 43.8% have not published in peer-reviewed scientific journals and 57.6% are not members of a professional research network. </p>
<p>Moreover, 60.6% collaborated in their research with individuals outside their home institution less than four times, while 61.5% of organisations have not hosted public debates related to research. It takes more intensive and frequent meetings and collaborations to build academic rigour and excellence.</p>
<p>The questionable link between social science research and policymaking exists in a research ecosystem with low government support. </p>
<p>The Indonesian government does not spend enough on basic research. As a result, universities take on commissioned research to generate income. </p>
<p>The government spends <a href="https://www.ksi-indonesia.org/in/news/detail/kinerja-riset-ilmu-sosial-indonesia-masih-rendah">around 0.2% of its GDP on research</a>, ten times lower than other countries in the region. Even though it increased from 0.09% in 2013 to 0.25% of GDP in 2016, it is still well below Singapore (2.2% of GDP), Malaysia (1.3%), Thailand (0.6%) and even Vietnam (0.4%). </p>
<h2>Independent research in democracy</h2>
<p>In Indonesia, there is little room for progressive and critical academic discourses to exist, which is a pre-requisite for the use of evidence in policymaking. </p>
<p>Social sciences have experienced a long history of repression in Indonesia and have often been used as a tool to serve the <a href="https://dfat.gov.au/about-us/publications/Documents/indo-ks-design.pdf">interests of the elite</a>. </p>
<p>In the 18th century, the Dutch colonial government controlled science and research development by employing scholars and scientists as <a href="https://www.jstor.org/stable/27751556?seq=1#page_scan_tab_contents">full-time bureaucrats</a>. </p>
<p>From 1965 to 1998, the authoritarian New Order administration used social sciences to <a href="https://books.google.co.id/books/about/Social_Science_and_Power_in_Indonesia.html?id=WM3_ulRJFlkC&redir_esc=y">justify state policies</a>. </p>
<p>While direct government control over social research has lessened following the fall of the New Order, other imperatives are at work in limiting the kinds of social issues that can be researched. </p>
<p>Since the mid-2000s, social research themes have been submitted to the demands of the market. As they have become income sources for private and state universities, research is dictated by what can be <a href="https://www.tandfonline.com/doi/full/10.1080/00472336.2019.1627389">sold</a> to the political, the private, the government, or the donor markets. </p>
<p>Ensuring the academic freedom of social scientists means they can both <a href="https://www.economist.com/asia/2018/06/21/why-indonesia-is-so-bad-at-lawmaking">strengthen and question government policies via criticism</a>.</p>
<p>But about 48.3% of our respondents experience undue influence from policymakers while doing their research. For example, many academic discussions have been <a href="https://tirto.id/menristekdikti-bukan-pawang-mahasiswa-eiXG">disbanded</a>, many of them after 2018, before the election year. </p>
<p>In 2019, survey data have also been used to justify the electability of political candidates. Different election polling agencies can produce <a href="https://www.scmp.com/news/asia/southeast-asia/article/3006562/indonesia-election-jokowi-takes-lead-over-prabowo-subianto">starkly different numbers</a> – one camp declared it was leading by 8 to 9 points, while the other claimed it had won 62% of the vote. </p>
<p>This demonstrates how “evidence” can be tailored for political purposes. </p>
<p>The appointment of former presidential candidate Prabowo Subianto, an ex-military general accused of <a href="https://www.theguardian.com/world/2019/oct/23/indonesia-joko-widodo-appoints-arch-rival-as-defence-minister-prabowo-subianto">human rights abuse</a>, as Jokowi’s defence minister also shows the competition between presidential candidates was less a reflection of a thriving democracy and more of an oligarchic consolidation. </p>
<p>In Indonesia, without proof that academic rigour is present, any claim of evidence-based policymaking must be treated with caution. </p>
<p>This superficial connection puts good policymaking at risk. Despite researchers bringing “evidence”, they are vulnerable to becoming rubber stamps that legitimise policies without properly assessing their value and impact. </p>
<p>Only by ensuring that academic rigour is present, and the independence of social scientists is non-negotiable, can we hope for a meaningful connection between academics and policymakers. </p>
<p>Without this, the poor imagination of Indonesian social scientists and their low presence in international academic and public debates on the global future of democracy will keep them as instruments for <a href="https://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7531.pdf">elite interests</a>.</p>
<hr>
<p><em>The authors would like to thank Dr. Herlambang Wiratraman, a socio-legal scholar from Universitas Airlangga, Indonesia, for the critical insight he has provided throughout the research.</em></p>
<p class="fine-print"><em><span>Inaya Rakhmani menerima dana dari Global Development Network. Inaya terafiliasi dengan Akademi Ilmuwan Muda Indonesia.</span></em></p><p class="fine-print"><em><span>Zulfa Sakhiyya menerima dana dari Global Development Network.</span></em></p>Indonesian policymaking is predominantly informed by research with poor theoretical engagement, with no strong tradition of peer review and with legal threats to academic freedom.Inaya Rakhmani, Assistant Professor at the Faculty of Social and Political Sciences, Universitas Indonesia, Universitas IndonesiaZulfa Sakhiyya, Assistant Professor at the Faculty of Languages and Arts, Universitas Negeri Semarang., Universitas Negeri SemarangLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/933222018-06-05T10:46:58Z2018-06-05T10:46:58ZWith federal funding for science on the decline, what’s the role of a profit motive in research?<figure><img src="https://images.theconversation.com/files/221412/original/file-20180601-142069-1d17td4.jpg?ixlib=rb-1.1.0&rect=318%2C661%2C4572%2C3163&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Money doesn't grow in flasks – scientists have to find funds outside the lab.</span> <span class="attribution"><a class="source" href="https://unsplash.com/photos/UmncJq4KPcA">chuttersnap/Unsplash</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span></figcaption></figure><p>What is the place of a profit motive in the production of knowledge at public universities?</p>
<p>The Trump administration’s initial budget request presented in 2017 offered one answer to that question. According to the American Association for the Advancement of Science, the budget proposal included a <a href="https://www.aaas.org/page/fy-2018-rd-appropriations-dashboard">17 percent reduction in funding for basic research</a>. Proposed cuts to particular agencies and programs within them, such as research on <a href="https://www.nature.com/polopoly_fs/1.22036.1496251823!/menu/main/topColumns/topLeftColumn/pdf/nature.2017.22036.pdf?origin=ppub">basic energy sciences at the Department of Energy</a>, were particularly acute. And while <a href="https://www.theatlantic.com/science/archive/2018/03/trump-science-budget/556229/">Congress intervened</a> to avoid these cuts, the current funding package is nevertheless part of a long-term trend of reduced federal commitment to science. </p>
<p><iframe id="Amo48" class="tc-infographic-datawrapper" src="https://datawrapper.dwcdn.net/Amo48/3/" height="400px" width="100%" style="border: none" frameborder="0"></iframe></p>
<p>Proposed and actual funding conveys a recurring message to American academic scientists: do more to attract money from other sources. In most instances, this means industry funding.</p>
<p>On the face of it, partnerships between academia and industry in the production of knowledge are both sensible and critical. Given sluggish economic growth and the prevalence of societal problems that require technological solutions, one might argue that universities should be extensively engaged in contributing to innovation and less concerned with research lacking an apparent connection to real-world impact. Why spend time and money on studying the mating habits of Japanese quail when there are problems like Alzheimer’s disease and excessive reliance on non-renewable fossil fuels that urgently need solutions right now? </p>
<p>Yet many critics argue that a profit motive in science creates a scenario in which scientists place their values and potential personal gain ahead of the public good, resulting in <a href="https://mobile.nytimes.com/2015/09/06/us/food-industry-enlisted-academics-in-gmo-lobbying-war-emails-show.html">bias and conflicts of interest</a>. Whether you are concerned about the advancement of science, economic innovation, or both, it’s worth considering the value and appropriateness of partnerships between academic scientists and the corporate sector.</p>
<p>What do researchers themselves think? I’ve spent more than a decade sitting down with hundreds of scientists around the world for in-depth conversations about their work. In my recent book, “<a href="https://jhupbooks.press.jhu.edu/content/fractured-profession">A Fractured Profession: Commercialism and Conflict in Academic Science</a>,” I examine how scientists experience the rise of commercialism in academic science. These researchers shared views with me that don’t necessarily fall neatly in line with either those who celebrate a profit motive in science nor those who lament it.</p>
<h2>What actually motivates scientists?</h2>
<p>Even if university administrators and federal officials reward profitable science, the scientists I spoke with say that profits are rarely their motivation. Commercialist scientists in academia certainly do not dismiss the importance of revenues or resources for research, but societal impact and the pursuit of status in science were more highly prized by the scientists in my study. Being able to claim that you reduced the cost of making a vaccine to less than the cost of the bottle in which it is stored, for example, is a new way to stand out at a university where most scientists are publishing in the top journals in their field. In this respect, self-interest – generating money and prestige – can coincide with the public good.</p>
<p>Perhaps more importantly to those who think that universities should operate even more like businesses <a href="https://jhupbooks.press.jhu.edu/content/academic-capitalism-and-new-economy">than they already do</a>, scholars are finding that average rates of return from commercialization — even at universities with the highest licensing income — <a href="https://www.kauffman.org/what-we-do/research/2011/06/rules-for-growth-promoting-innovation-and-growth-through-legal-reform">are relatively low</a>. In the same way that relatively few universities benefit considerably from big-time college sports, relatively few universities — typically those that are rich already — actually produce blockbusters that lead to financial windfalls. </p>
<p>Unlike some commentators and <a href="https://theconversation.com/people-dont-trust-scientific-research-when-companies-are-involved-76848">members of the public</a>, most of the scientists I spoke with are relatively unconcerned with <a href="https://rowman.com/ISBN/9780742543713/Science-in-the-Private-Interest-Has-the-Lure-of-Profits-Corrupted-Biomedical-Research-">conflicts of interest and bias</a> in commercially oriented research. In their view, peer review mitigates such questions. Even if a scientist stands to gain financially from the outcomes of her research, if an invention is not scientifically sound, researchers contend it would have little chance of success in the market.</p>
<p>The traditional scientists in academia I spoke with reported <a href="https://theconversation.com/rather-than-being-free-of-values-good-science-is-transparent-about-them-84946">two chief values</a>: support for curiosity-driven research and a long-term vision of the technological fruits of scientific research. Traditionalists are still the majority, but they encounter scarce resources for basic research and increasing pressure to connect their work to concrete societal impacts. In the words of one scientist, much of what scientists understand about cancer stems from work based on Nobel Prize-winning biologist Lee Hartwell’s curiosity-driven research on how yeast cells divide. “If he had to apply his research, he probably would have had to work for Budweiser,” he said.</p>
<h2>Investing in a mix of sorts of science</h2>
<p>What should be the role of the state and the market in the production of knowledge in the American research university? Both are critical.</p>
<p>History shows there’s an intrinsic value to letting people explore, because such <a href="https://theconversation.com/tracing-the-links-between-basic-research-and-real-world-applications-82198">exploration is critical to later marketplace innovations</a> and economic prosperity. Today’s multi-billion-dollar global positioning system industries rely on Einstein’s general theory of relativity and ideas from 19th-century geometry, the latter of which were dismissed by contemporaries as useless. Other technologies, such as Teflon, saccharine and the pacemaker, were accidental creations. While corporations once valued having internal basic science laboratories where exploratory or “blue-sky” research took place, now the U.S government is the chief, and under-resourced, patron for this important work.</p>
<p>Few universities generate vast commercial returns from commercially oriented research. As a society, we must therefore be cautious in how eagerly we unleash the forces of the market in funding science in academia. Similar experiments in substituting the market for the state in <a href="https://www.nytimes.com/2017/09/05/magazine/michigan-gambled-on-charter-schools-its-children-lost.html">primary schooling</a>, <a href="https://www.nytimes.com/2018/04/10/us/private-prisons-escapes-riots.html">prisons</a> and <a href="https://www.brookings.edu/articles/outsourcing-war/">the military</a> have not clearly paid off. </p>
<p>Much as a diversified investment portfolio includes various assets that balance returns and risk, society would benefit most from a healthy mix of investment in curiosity-driven, use-inspired and highly market-oriented research in academia.</p>
<p>Until scientists can better articulate why science is as worthy of investment as any other form of infrastructure, they will likely continue to encounter the message delivered today: look to the market.</p>
<p>
<section class="inline-content">
<img src="https://images.theconversation.com/files/248894/original/file-20181204-133095-1p2xxs2.png?w=128&h=128">
<div>
<header>David R. Johnson is the author of:</header>
<p><a href="https://jhupbooks.press.jhu.edu/content/fractured-profession">A Fractured Profession: Commercialism and Conflict in Academic Science</a></p>
<footer>Johns Hopkins University Press provides funding as a member of The Conversation US.</footer>
</div>
</section>
</p><img src="https://counter.theconversation.com/content/93322/count.gif" alt="The Conversation" width="1" height="1" />
<p class="fine-print"><em><span>This research was funded by the National Science Foundation Grant #0957033 “A New Reward System in Academic Science.”
Johns Hopkins University Press provides funding as a member of The Conversation US.</span></em></p>
<p><em>Money always seems tight for university scientists. A sociologist conducted hundreds of interviews to see how they think about funding sources and profit motives for basic and applied research.</em></p>
<p>David R. Johnson, Assistant Professor of Higher Education, University of Nevada, Reno. Licensed as Creative Commons – attribution, no derivatives.</p>

<h1>Starting next year, universities have to prove their research has real-world impact</h1>
<figure><img src="https://images.theconversation.com/files/194918/original/file-20171116-19768-8scze5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">For research to have an impact, it needs to be used or applied in some way.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Starting in 2018, Australian universities will be required to prove their research provides concrete benefits for taxpayers and the government, who fund it.</p>
<p>Education Minister Simon Birmingham recently <a href="https://ministers.education.gov.au/birmingham/focusing-research-make-difference">announced</a> the Australian Research Council (<a href="http://www.arc.gov.au/">ARC</a>) will introduce an <a href="http://www.arc.gov.au/engagement-and-impact-assessment">Engagement and Impact Assessment</a>. It will run alongside the current Excellence in Research for Australia (<a href="http://www.arc.gov.au/excellence-research-australia">ERA</a>) assessment exercise. This follows a <a href="http://www.arc.gov.au/ei-pilot-overview">pilot</a> of the Engagement and Impact Assessment, run in 2017.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/pilot-study-on-why-academics-should-engage-with-others-in-the-community-76707">Pilot study on why academics should engage with others in the community</a>
</strong>
</em>
</p>
<hr>
<p>Until now, research performance assessment has mostly been focused on the number of publications, citations and competitive grants won. This new metric changes the focus from inputs and outputs to outcomes. This is part of a continuing shift from quantity to quality, which began in <a href="http://www.ams.org/journals/notices/201103/rtx110300434p.pdf">earlier iterations</a> of the ERA. The Engagement and Impact assessment reflects a significant change in thinking about the types of research impact we value and why. </p>
<p>For research to have an impact, it needs to be used or applied in some way. For example, health research aims to have an impact on health outcomes. For that to happen, doctors, nurses and people working in health policy would need to use that research evidence in their practice or policy decision-making. </p>
<p>Despite the <a href="https://www.education.gov.au/review-research-policy-and-funding-arrangements-0">initial focus on commercial outcomes</a>, the Engagement and Impact Assessment has evolved to include a range of impact types. It provides an important incentive for researchers in all fields to think about how to engage those outside of academia who can translate their research into real-world impacts. It also enables researchers who were already engaging with research end-users and delivering positive impact to have these outcomes formally recognised for the first time at a national level. </p>
<h2>Community input</h2>
<p>Including an engagement component recognises researchers are not in direct control of whether their research will actually be used. Industry, government and the community also have an important role in making sure the potential benefits of research are achieved. </p>
<p>The engagement metrics allow universities to demonstrate and be rewarded for engaging industry, government and others in research, even if it doesn’t directly or immediately lead to impact. Case studies were chosen to demonstrate impact because they let researchers describe the important impacts they are achieving that metrics can’t capture.</p>
<p>The case studies will need to describe the impact achieved, its beneficiaries, the timeframe over which it occurred and the countries where it occurred. They’ll also set out the strategies employed to translate the research into real-world benefits.</p>
<p>The results will be assessed by a panel of experts for each field of research who will provide a rating of engagement and impact as low, medium or high.</p>
<h2>Cultural impacts</h2>
<p>The ARC has defined engagement as: </p>
<blockquote>
<p>the interaction between researchers and research end-users outside of academia, for the mutually beneficial transfer of knowledge, technologies, methods or resources. </p>
</blockquote>
<p>Impact has been defined as: </p>
<blockquote>
<p>the contribution that research makes to economy, society and environment and culture beyond the contribution to academic research.</p>
</blockquote>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/when-measuring-research-we-must-remember-that-engagement-and-impact-are-not-the-same-thing-56745">When measuring research, we must remember that 'engagement' and 'impact' are not the same thing</a>
</strong>
</em>
</p>
<hr>
<p>The definition of impact has been amended to include “culture”, which was not part of the definition applied in the pilot. This amendment speaks to concerns raised by the academic community around quantifying and qualifying impacts that vary significantly across different academic fields. It’s hard to compare, for example, the impact of an historic exhibition to the impact of astrophysics research on gravitational waves.</p>
<p>It’s also difficult to compare more basic or experimental research with applied research, such as health and well-being programs that can be directly applied in the community. Basic or experimental research can take a long time to lead to a measurable impact.</p>
<p>Classic examples of experimental research that had significant economic, health and social impacts it didn’t specifically set out to achieve are the discovery of <a href="https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/flemingpenicillin.html#alexander-fleming-penicillin">penicillin</a> and the development of <a href="http://www.sbs.com.au/news/explainer/wifi-australian-invention-helping-world-connect">WiFi</a>.</p>
<h2>An addition, not a replacement</h2>
<p>The <a href="http://www.hefce.ac.uk/pubs/rereports/year/2015/metrictide/">traditional research metrics</a> of grants, publication and citation, which work for basic, experimental and longer-time-to-impact research, are still in play. The Engagement and Impact Assessment has <a href="https://www.timeshighereducation.com/news/australian-universities-unimpressed-impact-assessment-plans">not been tied to funding decisions</a> at this stage.</p>
<hr>
<p>
<em>
<strong>
Read more:
<a href="https://theconversation.com/explainer-how-and-why-is-research-assessed-36895">Explainer: how and why is research assessed?</a>
</strong>
</em>
</p>
<hr>
<p>A <a href="https://search-proquest-com.ezp01.library.qut.edu.au/docview/1850750875?pq-origsite=summon">study</a> of the impact case studies submitted to the UK’s <a href="http://www.ref.ac.uk/">Research Excellence Framework</a> found high-impact scores were correlated to high quality scores. They concluded “impact was not being achieved at the expense of research excellence”. <a href="https://bmchealthservres.biomedcentral.com/articles/10.1186/1472-6963-14-2">Previous research</a> has shown research quality is an important enabler of the use of research.</p>
<p>Engagement and impact outcomes for a specific field of research at one university will be assessed against the same field at another university. This is also the case with traditional metrics and grants assessment.</p>
<p>Engagement will be assessed on four key metrics and an engagement narrative. These metrics are focused on funding provided by end-users of research such as businesses or individuals outside the world of academia who directly use or benefit from the research.</p>
<p>The four metrics are: cash support (against <a href="https://www.education.gov.au/higher-education-research-data-collection">Higher Education Research Data Collection</a> categories) or sponsored grants from end-users, research commercialisation income and how much income is made per researcher.</p>
<p>The engagement narrative will enable universities to provide detail about how they are engaging with end-users. There is also a <a href="http://www.arc.gov.au/sites/default/files/filedepot/Public/EI/Engagement_and_Impact_Assessment_Pilot_2017_Report.pdf">list of other engagement indicators</a> universities can draw on to describe their engagement activity.</p>
<p>At times, the value of research <a href="http://www.dailytelegraph.com.au/news/nsw/taxpayer-dollars-wasted-on-absurd-studies-that-do-nothing-to-advance-australian-research/news-story/c0c20e651da84b3f249f6e77405cfc7c">has been publicly questioned</a>. The Engagement and Impact Assessment will help the general public better understand the value of the research they fund.</p>
<p class="fine-print"><em><span>Pauline Zardo does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Engagement and impact will be part of research performance assessment starting in 2018, signalling a shift in what kind of research we value and why.Pauline Zardo, Data & Policy Research Fellow, Queensland University of TechnologyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/865522017-10-31T13:16:19Z2017-10-31T13:16:19ZSouth Africa can’t afford to see its universities pitch over the precipice<figure><img src="https://images.theconversation.com/files/192447/original/file-20171030-18700-cdgn8j.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">South Africa boasts world class universities. It must not allow their quality to drop.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>For the past two years the actions of <a href="http://chet.org.za/data/sahe-open-data">government</a> and protesting <a href="http://www.sahistory.org.za/article/student-protests-democratic-south-africa">students</a> have slowly started squeezing South Africa’s universities into a shadow of their former selves.</p>
<p>In his book “<a href="http://nb.bookslive.co.za/blog/2017/05/23/as-by-fire-an-urgent-and-necessary-book-on-the-south-african-student-protests-crisis/">As by Fire</a>” prominent educationalist Jonathan Jansen argues that South Africa is witnessing the end of its universities. He explains that this doesn’t mean the doors will close. Registration will not stop. The day to day business of universities will continue. But, he warns, the excellence evidenced by the rankings of South African universities will slowly dip into oblivion.</p>
<p>South Africa is the only country in Africa with ten universities that regularly feature on at <a href="https://www.topuniversities.com/university-rankings/world-university-rankings/2018">least one</a> world <a href="https://www.timeshighereducation.com/world-university-rankings/2017/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats">ranking list</a>. These ten are institutions that South Africans can be hugely proud of and whose achievements could serve as models for expanding excellence to other institutions.</p>
<p>The <a href="http://www.uct.ac.za/usr/news/downloads/2016/UniversitiesFundingSouthAfrica_FactSheet.pdf">decline in government funding</a> to South African universities has meant that institutions have had to look elsewhere to cover costs. This has inevitably included <a href="http://www.uct.ac.za/usr/news/downloads/2016/UniversitiesFundingSouthAfrica_FactSheet.pdf">increasing student tuition</a>. In turn, this contributed to student protests in 2015 and 2016. In some instances those protests shut down institutions – suspending their normal functioning for days or weeks at a time. </p>
<p>Shutdowns have knock-on effects, some of them long lasting. If universities have to close their doors, terms are delayed. Students don’t graduate and don’t pay fees. Universities cannot balance their budgets and infrastructure is not maintained. Staff salaries can’t be paid and academics have to work two or three jobs to survive. </p>
<p>The impact is also felt when it comes to funding. Funding agencies have deadlines and, if research outputs are not met, grants get cancelled. If grants are cancelled there is less money for equipment. Postgraduate student bursaries are cancelled. Postgraduate students drop out and go elsewhere, and even if new research grants are awarded the students are no longer available to do the research. Then the research outputs cannot be met – again. </p>
<p>Universities elsewhere – in <a href="https://theconversation.com/when-politics-and-academia-collide-quality-suffers-just-ask-nigeria-67313">Nigeria</a>, <a href="https://theconversation.com/kenyas-universities-are-in-the-grip-of-a-quality-crisis-54664">Kenya</a> and, as Jansen <a href="http://www.universityworldnews.com/article.php?story=2017082408304974">himself writes</a>, Zimbabwe and Uganda – stand as a stark warning. South Africa must act to halt the decline and save its universities’ well deserved global reputation of excellence.</p>
<h2>Sustaining universities</h2>
<p>Who cares about universities’ world rankings? Isn’t this just an elitist system in which South Africa cannot afford to compete given its <a href="http://www.statssa.gov.za/?p=9989">declining economy</a>? </p>
<p>No, it’s not. Excellence in academia is a self-perpetuating cycle. Break this cycle and universities dive into a spiral of decline. </p>
<p>Excellent students complete their degrees in the minimum time. They drive excellence in an institution’s research programmes. They then become top-quality postgraduate students who in turn become top-class academics, and a university’s research machine benefits. These graduates have the ability and the interest needed to engage with a university’s research activities. Because they excel academically, they are often keen to get to grips with more advanced research.</p>
<p>What I’ve found is that getting students involved early on in research often inspires them to study further, equipping them to be future lecturers and professors. Many research programmes – including <a href="https://www.fabinet.up.ac.za/index.php/research-groups/dst-nrf-centre-of-excellence-in-tree-health-biotechnology">my own</a> and that of the faculty in which I work – offer opportunities for undergraduate students to work in their laboratories. In this way students can participate in an institution’s research activities. </p>
<p>In turn, increased research output <a href="http://www.dhet.gov.za/Policy%20and%20Development%20Support/Research%20Outputs%20policy%20gazette%202015.pdf">benefits universities financially</a>. </p>
<p>Keeping a steady flow of research output will ensure that South Africa can continue to boast some of the world’s top ranked research programmes. The universities of Pretoria, the Witwatersrand and Cape Town are considered <a href="http://www.heraldlive.co.za/news/2017/04/10/three-sa-universities-score-top-marks-world-subject-rankings/">world leaders</a> in mycology, ornithology, anthropology and area studies. The research programmes that earned them these rankings depend on access to top quality postgraduate students. These bright young minds drive world class research – and they come from all over the world.</p>
<p>My own programme has attracted students from Australia, China, Iran, Kenya, Korea, Nigeria, Vietnam and Zimbabwe who are now studying with me. I have in the past also had the privilege of supervising students from Cameroon, Colombia, Chile, Ethiopia, Germany, Lesotho, Namibia, Oman, Switzerland, Uruguay, Venezuela and Zambia. This internationally rich group of students benefits my research and is hugely stimulating to the South African students in the programme. </p>
<h2>Preventing brain drain</h2>
<p>The common thread here is engaging students and providing them with the facilities and environment that will keep them in South Africa. Brain drain is <a href="https://businesstech.co.za/news/general/120211/this-map-shows-where-all-south-africas-skilled-workers-are-going/">a reality</a>. The country <a href="https://mg.co.za/article/2016-06-10-00-scarce-skills">needs more</a> doctors to staff its hospitals and engineers to build its power stations. Losing skilled professionals is <a href="https://businesstech.co.za/news/wealth/193764/how-the-rush-to-leave-south-africa-is-starting-to-hurt-business/">bad for the economy</a>.</p>
<p>In addition, university students the world over have changed the direction of business, governments and politics because they are a country’s intellectual leaders. When the strongest of these students choose not to study at universities in their homeland the country is robbed of its next generation of leaders.</p>
<p>Universities must maintain their excellence – or watch their best and brightest minds <a href="https://www.washingtonpost.com/r/2010-2019/WashingtonPost/2016/07/13/Editorial-Opinion/Graphics/KF_Report.pdf">choosing to study</a> and perhaps settle elsewhere.</p>
<p>The role of universities is to educate. They need to produce research and attract brilliant young thinkers who will, ultimately, contribute to a stronger economy and society. South Africa’s universities have long fulfilled these roles. The country cannot afford to see its tertiary education sector pitch over the precipice.</p>
<p class="fine-print"><em><span>Brenda Wingfield is a Professor in Genetics at the University of Pretoria
She holds the DST-NRF SARChI chair in Fungal Genomics
She is one of the vice presidents of the Academy of Science of South Africa (ASSAf) </span></em></p>South Africa must act to halt the decline and save its universities’ well deserved global reputation of excellence.Brenda Wingfield, Vice President of the Academy of Science of South Africa and DST-NRF SARChI chair in Fungal Genomics, Professor of Genetics, University of PretoriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/809972017-07-19T17:01:16Z2017-07-19T17:01:16ZHere’s the three-pronged approach we’re using in our own research to tackle the reproducibility issue<figure><img src="https://images.theconversation.com/files/178674/original/file-20170718-31872-1uv1xdv.JPG?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Step one is not being afraid to reexamine a site that's been previously excavated.</span> <span class="attribution"><span class="source">Dominic O'Brien. Gundjeihmi Aboriginal Corporation</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span></figcaption></figure><p>If you keep up with health or science news, you’ve probably been whipsawed between conflicting reports. Just days apart you may hear that “science says” coffee’s good for you, no actually it’s bad for you, actually red wine holds the secret to long life. As <a href="https://www.youtube.com/watch?v=0Rnq1NpHdmw">comedian John Oliver put it</a>:</p>
<blockquote>
<p>“After a certain point, all that ridiculous information can make you wonder: is science bullshit? To which the answer is clearly no. But there is a lot of bullshit currently masquerading as science.”</p>
</blockquote>
<p>A big part of this problem has to do with what’s been called a “<a href="https://theconversation.com/us/topics/reproducibility-5484">reproducibility crisis</a>” in science – many studies, if run a second time, don’t come up with the same results. <a href="https://doi.org/10.1038/533452a">Scientists are worried</a> about this situation, and <a href="https://www.nature.com/collections/byblhcfwhw">high-profile</a> international <a href="https://doi.org/10.1126/science.aab2374">research journals</a> have raised the alarm, too, calling on researchers to put more effort into ensuring their results can be reproduced, rather than only striving for splashy, one-off outcomes.</p>
<p><a href="https://www.nytimes.com/2016/05/29/opinion/sunday/why-do-so-many-studies-fail-to-replicate.html">Concerns about</a> <a href="https://www.theatlantic.com/science/archive/2016/03/psychologys-replication-crisis-cant-be-wished-away/472272/">irreproducible results</a> <a href="http://www.slate.com/articles/health_and_science/future_tense/2016/04/biomedicine_facing_a_worse_replication_crisis_than_the_one_plaguing_psychology.html">in science resonate</a> <a href="https://fivethirtyeight.com/features/science-isnt-broken/">outside the ivory tower</a>, as well, because a lot of this research translates into information that affects our everyday lives. </p>
<p>For example, it informs what we know about how to stay healthy, how doctors should look after us when we’re sick, how best to educate our children and how to organize our communities. If study results are not reproducible, then we can’t trust them to give good advice on solving our everyday problems – and society-wide challenges. Reproducibility is not just a minor technicality for specialists; it’s a pressing issue that affects the role of modern science in society.</p>
<p>Once we’ve identified that reproducibility is a big problem, the question becomes: How do we tackle it? Part of the answer has to do with changing incentives for researchers. But there are plenty of things we in the research community can do right now in the course of our scientific work.</p>
<p>It might come as a surprise that <a href="https://doi.org/10.1007/s10816-015-9272-9">archaeologists are at the forefront</a> of finding ways to improve the situation. Our <a href="https://doi.org/10.1038/nature22968">recent paper in Nature</a> demonstrates a concrete three-pronged approach to improving the reproducibility of scientific findings.</p>
<h2>Going back to where it all started</h2>
<p>In our new publication we describe recent work at an archaeological site in northern Australia. The results of our excavations and laboratory analyses show that <a href="http://theconversation.com/buried-tools-and-pigments-tell-a-new-history-of-humans-in-australia-for-65-000-years-81021">people arrived in Australia 65,000 years ago</a>, substantially earlier than the previous consensus estimate of 47,000 years ago. <a href="http://theconversation.com/buried-tools-and-pigments-tell-a-new-history-of-humans-in-australia-for-65-000-years-81021">This date has exciting implications</a> for our understandings of human evolution.</p>
<p>A less obvious detail about this study is the care we’ve taken to make our results reproducible. Our reproducibility strategy had three parts: fieldwork, labwork and data analyses.</p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=906&fit=crop&dpr=1 600w, https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=906&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=906&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=1138&fit=crop&dpr=1 754w, https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=1138&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/178680/original/file-20170718-10320-1sapmfd.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=1138&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Ben Marwick and colleagues excavating at Madjedbebe.</span>
<span class="attribution"><span class="source">Dominic O'Brien. Gundjeihmi Aboriginal Corporation</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>Our first step toward reproducibility was our choice of what to investigate. Rather than striking out to someplace new, we reexcavated an archaeological site <a href="https://doi.org/10.1016/j.jhevol.2015.03.014">previously known to have very old artifacts</a>.</p>
<p>The rockshelter site Madjedbebe in Australia’s Northern Territory had been excavated twice before. Famously, excavations there in 1989 indicated that people had <a href="https://doi.org/10.1038/345153a0">arrived in Australia by about 50,000 years ago</a>. But many archaeologists did not accept this age, rejecting anything older than 47,000 years.</p>
<p>This age was controversial from its first publication, and our goal in revisiting the site was to check whether it was reliable. Could that controversial 50,000-year age be reproduced, or was it just a chance result that didn’t indicate the true time period for human habitation in Australia?</p>
<p>Like many scientists, archaeologists are generally less interested in returning to old discoveries, instead preferring to forge new paths in search of novel results. The problem with this is that it can lead to many unresolved questions, making it difficult to build a solid foundation of knowledge. </p>
<h2>Double-check the lab tests</h2>
<p>The second part of our reproducibility strategy was to verify that our laboratory analyses were reliable.</p>
<p>Our team used <a href="https://www.thoughtco.com/luminescence-dating-cosmic-method-171538">optically stimulated luminescence</a> methods to date the sand grains near the ancient artifacts. This method is complex, and there are only a few places in the world that have the instruments and skills to date these kinds of samples.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=766&fit=crop&dpr=1 600w, https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=766&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=766&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=963&fit=crop&dpr=1 754w, https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=963&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/178820/original/file-20170719-27696-r2h9i8.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=963&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Zenobia Jacobs produced the new ages for the Madjebdebe site based on her work in the Luminescence Dating Laboratory at the University of Wollongong, Australia.</span>
<span class="attribution"><span class="source">University of Wollongong</span>, <a class="license" href="http://creativecommons.org/licenses/by-nd/4.0/">CC BY-ND</a></span>
</figcaption>
</figure>
<p>We first analyzed our samples in our laboratory at the <a href="http://smah.uow.edu.au/sees/facilities/UOW002889.html">University of Wollongong</a> to find their ages. Then we sent blind duplicate samples to another laboratory at the <a href="https://www.adelaide.edu.au/ipas/facilities/luminescence/">University of Adelaide</a> to analyze, without telling that lab our results. With both sets of analyses in hand, we compared them; it turned out in this case that they got the same ages as we did for the same samples.</p>
<p>This kind of verification is not a common practice in archaeology, but because this site was already controversial, we wanted to make sure the ages we obtained were reproducible.</p>
<p>While this extra work involved some additional cost and time, it’s vital to proving that our dates give the true ages of the sediments surrounding the artifacts. This verification shows that our lab results are not due to chance, or the unique conditions of our laboratory. Other archaeologists, and the public, can be more confident in our findings because we’ve taken these extra steps. This external checking should be standard practice in any science where controversial findings are at stake. </p>
<h2>Don’t let the computer be a black box</h2>
<p>After we completed the excavation and lab analyses, we analyzed the data on our computers. This stage of our research was very similar to what scientists in many other fields do. We loaded the raw data into our computers to visualize it with plots and test hypotheses with statistical methods.</p>
<p>However, while many researchers do this work by pointing and clicking using off-the-shelf software, we tried as much as possible to write scripts in the <a href="https://doi.org/10.1038/517109a">R programming language</a>.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=402&fit=crop&dpr=1 600w, https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=402&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=402&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=505&fit=crop&dpr=1 754w, https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=505&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/178686/original/file-20170718-10283-q6g5bg.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=505&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Could be the enemy of reproducibility if it helps obscure the steps in data analysis.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/erinkohlenbergphoto/5353222369">Erin Kohlenberg</a>, <a class="license" href="http://creativecommons.org/licenses/by/4.0/">CC BY</a></span>
</figcaption>
</figure>
<p>Pointing and clicking generally leaves no traces of important decisions made during data analysis. Mouse-driven analyses leave the researcher with a final result, but none of the steps to get that result is saved. This makes it <a href="https://theconversation.com/how-computers-broke-science-and-what-we-can-do-to-fix-it-49938">difficult to retrace the steps</a> of an analysis, and check the assumptions made by the researcher.</p>
<p>On the other hand, our scripts contain a record of all our data analysis steps and decisions. They’re like an exact recipe to generate our results. Other researchers not using scripts for their data analysis don’t have these recipes, so their results are much harder to reproduce. </p>
<p>Another advantage of our choice to use scripts is that we can share them with the scientific community and the public. We follow <a href="https://doi.org/10.1038/nn.4550">standard practices</a> by making our script files and main data files <a href="https://osf.io/qwfcz/">freely available online</a> so anyone can inspect the details of our analysis, or explore new ideas using our data.</p>
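<p>As an illustration only – this sketch is written in Python rather than the R we actually used, and the data file, column names and threshold are invented – a scripted analysis “recipe” might look something like this:</p>
<pre><code># Hypothetical example of a scripted analysis "recipe".
# Every step (loading, filtering, summarising, plotting) is written down
# in this file, so anyone with the same data can rerun and check it.
import pandas as pd
import matplotlib.pyplot as plt

ages = pd.read_csv("osl_ages.csv")        # invented file of dated sand-grain samples
deep = ages[ages["depth_m"] >= 2.0]       # keep samples from the lowest, oldest layer
print(deep["age_ka"].describe())          # summary statistics, reprinted on every run

plt.hist(deep["age_ka"], bins=20)
plt.xlabel("Age (thousands of years)")
plt.ylabel("Number of samples")
plt.savefig("age_distribution.png")       # figure regenerated from the raw data each time
</code></pre>
<p>The particular commands matter less than the fact that the full chain, from raw data file to printed summary and saved figure, is captured in one file that anyone can rerun.</p>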
<p>It’s easy to understand why many researchers prefer point-and-click over writing scripts for their data analysis. Often that’s what they were taught as students. It’s hard work and time-consuming to learn new analysis tools among the pressures of teaching, applying for grants, doing fieldwork and writing publications. Despite these challenges, there is an accelerating shift away from point-and-click toward scripted analyses in many areas of science.</p>
<h2>Combating irreproducibility one step at a time</h2>
<p>Our recent paper is part of a new movement emerging in many disciplines to improve the reproducibility of science. Examples of recent papers that have made a commitment to reproducibility similar to ours have come from <a href="https://doi.org/10.1038/nature22975">epidemiology</a>, <a href="https://doi.org/10.1038/s41559-017-0160">oceanography</a> and <a href="https://doi.org/10.7554/eLife.20470">neuroscience</a>.</p>
<p>We hope our example will inspire other scientists to be strategic about improving the reproducibility of their research. Some of these steps can be difficult for researchers: they mean learning how to use unfamiliar software, and publicly sharing more of their data and methods than they’re accustomed to. But they’re important for generating reliable results – and for maintaining public confidence in scientific knowledge.</p>
<p class="fine-print"><em><span>Ben Marwick receives funding from the Australian Research Council, the University of Wollongong, and the University of Washington. This work was supported in part by the University of Washington eScience Institute.</span></em></p><p class="fine-print"><em><span>Zenobia Jacobs receives funding from the Australian Research Council. </span></em></p>A team of archaeologists strived to improve the reproducibility of their results, influencing their choices in the field, in the lab and during data analysis.Ben Marwick, Associate Professor of Archaeology, University of WashingtonZenobia Jacobs, Professor, University of WollongongLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/768512017-05-30T01:49:32Z2017-05-30T01:49:32ZResearch transparency: 5 questions about open science answered<figure><img src="https://images.theconversation.com/files/171204/original/file-20170526-6389-1eepgnq.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Opening up data and materials helps with research transparency.</span> <span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/book-wisdom-life-read-magic-background-515241850">REDPIXEL.PL via Shutterstock.com</a></span></figcaption></figure><p><strong>What is “open science”?</strong></p>
<p><a href="https://osf.io/preprints/psyarxiv/ak6jr">Open science</a> is a set of practices designed to make scientific processes and results more transparent and accessible to people outside the research team. It includes making complete research materials, data and lab procedures freely available online to anyone. Many scientists are also proponents of <a href="https://sparcopen.org/open-access/">open access</a>, a parallel movement involving making research articles available to read without a subscription or access fee.</p>
<p><strong>Why are researchers interested in open science? What problems does it aim to address?</strong></p>
<p>Recent research finds that many published scientific findings might not be reliable. For example, researchers have reported being able to replicate <a href="https://elife.elifesciences.org/collections/reproducibility-project-cancer-biology">only 40 percent</a> <a href="https://doi.org/10.1038/nrd3439-c1">or less</a> of <a href="http://www.nature.com/nature/journal/v483/n7391/full/483531a.html">cancer biology results</a>, and a large-scale <a href="https://doi.org/10.1126/science.aac4716">attempt to replicate 100 recent psychology studies</a> successfully reproduced fewer than half of the original results.</p>
<p>This has come to be called a “<a href="https://theconversation.com/we-found-only-one-third-of-published-psychology-research-is-reliable-now-what-46596">reproducibility crisis</a>.” It’s pushed many scientists to look for ways to improve their research practices and increase study reliability. Practicing open science is one way to do so. When scientists share their underlying materials and data, other scientists can more easily evaluate and attempt to replicate them.</p>
<p>Also, open science can help speed scientific discovery. When scientists share their materials and data, others can use and analyze them in new ways, potentially leading to new discoveries. Some journals are specifically dedicated to publishing data sets for reuse (<a href="https://www.nature.com/sdata/">Scientific Data</a>; <a href="http://openpsychologydata.metajnl.com/">Journal of Open Psychology Data</a>). <a href="http://doi.org/10.5334/jopd.ac">A paper in the latter</a> has already been cited 17 times in under three years – nearly all these citations represent new discoveries, sometimes on topics unrelated to the original research.</p>
<p><strong>Wait – open science sounds just like the way I learned in school that science works. How can this be new?</strong></p>
<p>Under the status quo, science is shared through a single vehicle: Researchers publish journal articles summarizing their studies’ methods and results. The key word here is summary; to keep an article clear and succinct, authors may omit important details. Journal articles are vetted via the peer review process, in which an editor and a few experts assess them for quality before publication. But – perhaps surprisingly – the primary data and materials underlying the article are almost never reviewed. </p>
<p>Historically, this made some sense because journal pages were limited, and storing and sharing materials and data were difficult. But with computers and the internet, it’s much easier to practice open science. It’s now feasible to store large quantities of information on personal computers, and <a href="https://www.nature.com/sdata/policies/repositories">online repositories to share study materials and data</a> are becoming more common. Recently, some journals have even begun to <a href="http://journals.plos.org/plosone/s/data-availability">require</a> or <a href="https://osf.io/tvyxz/wiki/5.%20Adoptions%20and%20Endorsements/">reward</a> <a href="http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002456">open science practices</a> like publicly posting materials and data.</p>
<figure class="align-center zoomable">
<a href="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=397&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=397&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=397&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=499&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=499&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171205/original/file-20170526-6402-1kb6dxp.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=499&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open science makes sharing data the default.</span>
<span class="attribution"><a class="source" href="https://www.shutterstock.com/image-photo/client-passing-documentation-binders-his-partner-330663044">Bacho via Shutterstock.com</a></span>
</figcaption>
</figure>
<p>There are still some difficulties sharing extremely large data sets and physical materials (such as the specific liquid solutions a chemist might use), and some scientists might have good reasons to keep some information private (for instance, trade secrets or study participants’ personal information). But as time passes, more and more scientists will likely practice open science. And, in turn, science will improve.</p>
<p>Some do view the open science movement as a return to science’s core values. Most researchers over time have <a href="https://doi.org/10.1525/jer.2007.2.4.3">valued transparency</a> as a key ingredient in evaluating the truth of a claim. Now with technology’s help it is much easier to share everything.</p>
<p><strong>Why isn’t open science the default? What incentives work against open science practices?</strong></p>
<p>Two major forces work against adoption of open science practices: habits and reward structures. First, most established researchers have been practicing closed science for years, even decades, and changing these old habits requires some upfront time and effort. <a href="https://osf.io">Technology</a> is helping speed this process of adopting open habits, but behavioral change is hard. </p>
<p>Second, scientists, like other humans, tend to repeat behaviors that are rewarded and avoid those that are punished. Journal editors have tended to favor publishing papers that tell a tidy story with perfectly clear results. This has led researchers to craft their papers to be free from blemish, omitting “failed” studies that don’t clearly support their theories. But real data are often messy, so being fully transparent can open up researchers to critique. </p>
<p>Additionally, some researchers are afraid of being “scooped” – they worry someone will steal their idea and publish first. Or they fear that others will <a href="http://www.nejm.org/doi/full/10.1056/NEJMe1516564">unfairly benefit</a> from using shared data or materials without putting in as much effort. </p>
<p>Taken together, some researchers worry they will be punished for their openness and are skeptical that the perceived increase in workload that comes with adopting open science habits is needed and worthwhile. We believe scientists must continue to <a href="https://osf.io/tvyxz/">develop systems</a> to <a href="http://www.ourdigitalmags.com/publication/?i=365522&article_id=2657445&view=articleBrowser&ver=html5#%7B%22issue_id%22:365522,%22view%22:%22articleBrowser%22,%22article_id%22:%222657445%22%7D">allay fears</a> and reward openness. </p>
<p><strong>I’m not a scientist; why should I care?</strong></p>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=466&fit=crop&dpr=1 600w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=466&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=466&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=585&fit=crop&dpr=1 754w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=585&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/171145/original/file-20170526-6380-6rryx7.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=585&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Open access is the cousin to open science – the idea is that research should be freely available to all, not hidden behind paywalls.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/34070876@N08/3602393341">h_pampel</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>Science benefits everyone. If you’re reading this article now on a computer, or have ever benefited from an antibiotic, or kicked a bad habit following a psychologist’s advice, then you are a consumer of science. Open science (and its cousin, open access) means that anyone – including teachers, policymakers, journalists and other nonscientists – can access and evaluate study information.</p>
<p>Considering automatic enrollment in a 401k at work or whether to have that elective screening procedure at the doctor? Want to ensure your tax dollars are spent on policies and programs that actually work? Access to high-quality research evidence matters to you. Open materials and open data facilitate reuse of scientific products, increasing the value of every tax dollar invested. Improving science’s reliability and speed benefits us all.</p>
<p class="fine-print"><em><span>Elizabeth Gilbert supports the Society for the Improvement of Psychological Science and has published on replication efforts as part of the Open Science Collaboration. Along with Katherine Corker and Barbara Spellman, she has a chapter called "Open Science: What, why, how" forthcoming in the Stevens Handbook of Experimental Psychology and Cognitive Neuroscience.</span></em></p><p class="fine-print"><em><span>Katie Corker is on the executive board for the Society for the Improvement of Psychological Science (improvingpsych.org) and an ambassador for the Center for Open Science (cos.io). She is also an editorial board member for Scientific Data. All of these roles are pro bono.</span></em></p>Partly in response to the so-called ‘reproducibility crisis’ in science, researchers are embracing a set of practices that aim to make the whole endeavor more transparent, more reliable – and better.Elizabeth Gilbert, Postdoctoral Research Fellow in Psychiatry and Behavioral Sciences, Medical University of South CarolinaKatie Corker, Assistant Professor of Psychology, Grand Valley State University Licensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/726692017-02-26T16:59:05Z2017-02-26T16:59:05ZThe peer-review system for academic papers is badly in need of repair<figure><img src="https://images.theconversation.com/files/156762/original/image-20170214-25992-15ckbwa.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The scientific refereeing process can be tedious, time-consuming and isn't very rewarding.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>Peer review, or scientific refereeing, is the basis of the academic process. It’s a rigorous evaluation that aims to ensure only work which advances knowledge is published in a scientific journal. Scientists must be able to trust this system: if they see that something is peer reviewed, it should be a hallmark of quality.</p>
<p>When the editor of a scientific journal receives a manuscript, they ask another scientist – a specialist in the field – to review it. The referee is required to advise the editor whether the manuscript should be published and to give <a href="https://theconversation.com/how-plugging-into-well-connected-colleagues-can-help-research-fly-71223">feedback</a> to the authors.</p>
<p>The system is not flawless. There have been instances of <a href="http://blogs.lse.ac.uk/impactofsocialsciences/2016/12/13/manipulating-the-peer-review-process-why-it-happens-and-how-it-might-be-prevented/">fraud and manipulation</a> of the refereeing process, but these are – we hope – isolated cases. </p>
<p>But there are much bigger systemic problems associated with peer review, and these are negatively affecting scientific credibility. One is that, globally, it is hard to find referees: reviewing a manuscript takes a lot of time and offers minimal reward. Very few journals pay referees, and most academics who act as referees do so for free in their spare time.</p>
<p>On top of this, those who do act as referees often struggle to deliver on time. Worse still, their reports are not always helpful to editors or authors. </p>
<p>Some journals work actively to tackle these issues, but more can be done to ensure that the scientific refereeing system retains its integrity.</p>
<h2>The challenges</h2>
<p>Journal editors are frustrated about the dearth of referees. In an <a href="https://hub.wiley.com/community/exchanges/discover/blog/2015/01/07/recognition-for-peer-review-and-editing-in-australia-and-beyond">open letter</a> to the scientific community, a group of editors wrote that, despite:</p>
<blockquote>
<p>… so much weight [being] given to peer-reviewed publication the essential “backroom” tasks of editing journals and reviewing articles are rarely acknowledged as aspects of academic performance.</p>
</blockquote>
<p>No wonder they’re worried: more than <a href="http://www.informationr.net/ir/14-1/paper391.html">1 million research articles</a> are published globally each year. That requires a lot of referees. But finding appropriate referees is just one part of the bigger task facing editors.</p>
<p>Editors have to get referees to stick to the agreed deadlines. That’s not easy: people tend not to prioritise their review tasks since time spent on their own research is more rewarding.</p>
<p>An experiment conducted with the Journal of Public Economics based in Cambridge in the US found that its referees are late with their reports <a href="http://pubs.aeaweb.org/doi/10.1257/jep.28.3.169">half of the time</a>. There are also instances, across journals, of referees simply never delivering even though they’ve promised to do so.</p>
<p>In some disciplines, these problems have given rise to a serious publication lag – the time between when a manuscript arrives and when it is actually published. Over the past 30 years this lag has nearly <a href="http://www.journals.uchicago.edu/doi/10.1086/341868">tripled</a> in economics, from 11 months to just under 30 months. </p>
<p>It not only takes longer to disseminate ideas. The publication lag also worsens the prospects of <a href="http://voxeu.org/article/publication-lags-and-young-economists-research-output">young scientists</a> who need publications to be hired.</p>
<p>Another problem with the existing system is that referee reports do not always adequately inform the editor nor really suggest ways of fundamentally improving the article.</p>
<p>It’s not just authors who complain about this: <a href="http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=8104">journal editors</a> do too. One explanation is that referees may follow their own interests, which are not necessarily those of the editor or the author.</p>
<p>All too often they try to impress editors by making blemishes look like flaws. Economists call this problem “<a href="https://academic.oup.com/rfs/article/28/3/637/1577216/Editorial-Cosmetic-Surgery-in-the-Academic-Review">signal jamming</a>”. At worst, it can lead editors to turn down innovative research.</p>
<h2>Possible changes</h2>
<p>The good news is that journals are aware of these problems, and are committed to tackling them.</p>
<p>Journals should develop and nurture a large base of potential referees, constantly adding new ones and retaining old ones. And these referees need proper recognition. This could involve simply thanking referees publicly, or perhaps awarding prizes for good refereeing.</p>
<p>Journals should also consider paying referees. The estimated value of unpaid referee time is as much as <a href="https://www.timeshighereducation.com/news/unpaid-peer-review-is-worth-19bn/402189.article">£1.9 billion a year</a> – it is clearly a service that requires some financial reward.</p>
<p>Small changes help, too. <a href="http://voxeu.org/article/lessons-experiment-referees-journal-public-economics">Shorter deadlines</a> reduce turnaround time because referees often submit their work just before the deadline. A public list of referees’ turnaround times encourages them to stay on time, too.</p>
<p>Editors should also <a href="http://rfssfs.org/files/2015/01/Joint-Editorial-Advice-for-Authors-2002.pdf">reject</a> <a href="https://academic.oup.com/rfs/article/26/11/2685/1613905/Joint-Editorial">articles</a> that are too sloppy, rather than letting a referee improve them.</p>
<p>Editors should also engage in “<a href="https://academic.oup.com/rfs/article/28/3/637/1577216/Editorial-Cosmetic-Surgery-in-the-Academic-Review">active editing</a>”, instructing the author to ignore referee requests that are merely asking them to fix blemishes.</p>
<p>Editors should also <a href="https://academic.oup.com/rfs/article/25/5/1331/1569914/Reviewing-Less-Progressing-More">pare down</a> the demands on referees, perhaps by asking them to <a href="http://pubs.aeaweb.org/doi/10.1257/jep.31.1.231">separate</a> necessities from suggestions. The guiding principle should be that the work is the author’s – not the referee’s.</p>
<h2>New approaches are being tested</h2>
<p>Journals are already testing new approaches. For instance, some require their editors to <a href="http://revfin.org/new-referee-awards-and-referee-database/">judge the quality</a> of a referee to weed out those people who are simply unhelpful. </p>
<p>Elsevier, a major publisher, has launched a <a href="https://www.reviewerrecognition.elsevier.com/">platform</a> which publicly lists referees and how often they have written referee reports. A similar, independent platform is <a href="https://publons.com/home/">Publons</a>.</p>
<p>“Open peer review” is also growing in popularity. Traditionally, reviewers remain anonymous to guarantee an unbiased opinion. Open peer review goes the opposite way: the referee’s name and report are published together with the article. Everyone can see who the referee was, which is meant to encourage transparency. Not everyone is <a href="http://www.nature.com/nature/peerreview/debate/">convinced</a> about this approach.</p>
<p>Another option is post-publication peer review, in which articles are open for comments all the time from anyone. Sadly, <a href="http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/07/controversy-of-post-publication-peer-review/">internet trolls</a> have tainted this process for many scientists.</p>
<p>It is encouraging that the problems of peer review are being debated and that new approaches are being tested. The peer-review process is very important and its challenges must be taken seriously if academics are to keep publishing quality articles that disseminate new ideas.</p>
<p class="fine-print"><em><span>The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.</span></em></p>There are major systemic problems associated with peer review that are negatively affecting scientific credibility.Michael E. Rose, PhD Candidate in Economics, University of Cape TownWillem H. Boshoff, Associate Professor of Economics, Stellenbosch UniversityLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/662232016-09-29T17:23:42Z2016-09-29T17:23:42ZSouth Africa’s research output will be the biggest victim of student protests<figure><img src="https://images.theconversation.com/files/139759/original/image-20160929-27042-dn2yk.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The costs of student protests are far higher than imagined.</span> <span class="attribution"><span class="source">Rogan Ward/Reuters</span></span></figcaption></figure><p>It will cost <a href="http://businesstech.co.za/news/government/138169/damage-to-sa-universities-hits-r600-million-and-counting/">around R600 million</a> to repair the damage caused by student protests across South Africa. That’s according to the country’s Minister of Higher Education and Training.</p>
<p>I’d suggest that this figure is merely the tip of the iceberg. The true cost of these protests is far higher. This cost can’t be measured in hard currency – yet. The higher education sector is being held to ransom and universities could lose the ability to do their core work: to teach and to conduct research. </p>
<p>This will have dire consequences for the entire country. South Africa is already struggling to produce enough skilled labour to meet demand. If universities cannot complete their academic years, as <a href="http://www.politicsweb.co.za/opinion/uct-stop-feeding-the-crocodile">some fear</a>, some students may miss out on the chance to graduate on time. They may choose to drop out entirely rather than trying to fund another expensive year of study. </p>
<p>Bright academics and postgraduates are likely to seek work or study opportunities elsewhere and major research projects could stumble as higher education’s crisis deepens.</p>
<h2>Damaging the research machine</h2>
<p>I have been an academic for more than 30 years. I have taught students; I still supervise postgraduates and I run a very successful <a href="http://www.fabinet.up.ac.za/">research programme</a>.</p>
<p>I have a deep understanding of the value of education and what it takes to establish a vibrant research culture at a university. I’m also keenly aware of what it takes to do internationally leading research in a developing world environment. I hold a <a href="http://www.nrf.ac.za/division/rcce/instruments/research-chairs">research chair</a> in Fungal Genomics. These chairs are designed to attract and retain research excellence at public universities. My research focuses on understanding tree pathogens, predominantly fungi which cause tree disease. I have trained almost 100 Masters and PhD students and currently supervise 10 postgraduate students.</p>
<p>Such postgraduate students are the lifeblood of research programmes. The quality of research done in any country is hugely influenced by the quality of postgraduate students in these programmes. In recent years, more South African students in my research programme have chosen to stay in the country to carry out postgraduate research; they know that the quality of our research is internationally <a href="https://www.timeshighereducation.com/world-university-rankings/best-universities-in-africa-2016">competitive and respected</a>.</p>
<p>These local students are joined by postgraduates from elsewhere in the world. They are also drawn by South Africa’s globally competitive research culture.</p>
<p>International postgraduates are an important asset in South Africa’s bid to produce more scientific PhD holders in the coming years. The country’s department of science and technology has identified a need to <a href="http://www.sagreenfund.org.za/wordpress/wp-content/uploads/2015/04/10-Year-Innovation-Plan.pdf">graduate more PhD students</a>.</p>
<p>The department has set a very ambitious target for universities: 3,000 science and technology PhD graduates by 2018. South Africa doesn’t have the academics to train this many PhD candidates. But research-intensive universities have been increasing their supervisory capacity by attracting postdoctoral researchers from around the world to help train postgraduates.</p>
<p>Will these postdoctoral researchers and foreign postgraduates still come to South Africa if protests persist? Will local students choose to stay and study towards their PhDs – or will they look for university systems that are not rocked by disruptions?</p>
<h2>The potential for brain drain</h2>
<p>Research is a global activity. Top researchers in South Africa annually host leading researchers from elsewhere in the world. These research leaders interact with academics and graduate students. In this way South African researchers are inspired by the best in the world and will then go on to produce internationally leading research. </p>
<p>But why would these international guests come to campuses racked by protests? As I write this, a number of seminars by overseas visitors at my own institution have been postponed and in some cases cancelled. South Africa is poorer for this.</p>
<p>Much of the research I’m referring to here is focused on the country’s own, often unique problems. If this research machine is compromised South Africa will have to “import” – at a significant cost – researchers from other countries to solve its problems.</p>
<p>Local academics, too, are unsettled by what’s happening. Many of my colleagues are very concerned about their futures. Some have told me they are looking actively for positions elsewhere. Young academics who’ve grown up and trained in South Africa could well look for opportunities elsewhere and, given the quality of education they’ve received, they will probably succeed.</p>
<h2>Research programmes do not develop overnight</h2>
<p>Much has <a href="http://www.aau.edu/research/article.aspx?id=15486">been written</a> about the <a href="http://www.dsm.com/corporate/science/science-can-change-the-world.html">value of research</a>. For those who remain unconvinced, it’s useful to think of research as the equivalent of an insurance policy. In doing research you ensure that a country and its people are able to understand and deal with future challenges.</p>
<p>A research programme is not something that appears overnight. It takes years to develop, nurture and grow. It often involves the lifetime endeavour of the researchers concerned. Running a research programme involves a commitment that is essentially 24 hours a day, 365 days a year. A research programme cannot be switched off for a day, week or a month and then restarted where you left off.</p>
<p>Any breaks mean that you have to restart many activities, often from scratch. The resulting delays in delivery are very problematic because research is most often done using grant or industrial funding. Granting agencies expect annual reports and delivery on what was promised. Industry funding often requires quarterly reporting, and funding can be cut if the research outputs are not achieved.</p>
<p>The current student protests are already having a negative impact on research across South Africa. Some universities are suggesting that the academic year will have to be extended into 2017. If campuses are closed and postgraduate students and lecturing staff are told to go home, the cost to the research machine is incalculable – certainly far more than R600 million.</p>
<p class="fine-print"><em><span>Brenda Wingfield receives funding from industry and government granting agencies to support her research. She is the DST-NRF SARChI research chair in Fungal Genomics.</span></em></p>There is a very real risk that South Africa’s major research projects will stumble and the whole research machine will be shut down by ongoing student protests.Brenda Wingfield, Member of the Academy of Science of South Africa and Professor of Genetics, University of PretoriaLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/656192016-09-21T00:01:46Z2016-09-21T00:01:46ZWhy isn’t science better? Look at career incentives<figure><img src="https://images.theconversation.com/files/138450/original/image-20160920-11131-1alomb3.jpg?ixlib=rb-1.1.0&rect=49%2C65%2C5289%2C3660&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Experiment design affects the quality of the results.</span> <span class="attribution"><a class="source" href="https://www.flickr.com/photos/iaea_imagebank/8147632150">IAEA Seibersdorf Historical Images</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span></figcaption></figure><p>There are often substantial gaps between the idealized and actual versions of those people whose work involves providing a social good. Government officials are supposed to work for their constituents. Journalists are supposed to provide unbiased reporting and penetrating analysis. And scientists are supposed to relentlessly probe the fabric of reality with the most rigorous and skeptical of methods. </p>
<p>All too often, however, what should be just isn’t so. In a number of scientific fields, <a href="https://www.washingtonpost.com/news/speaking-of-science/wp/2015/08/28/no-sciences-reproducibility-problem-is-not-limited-to-psychology/">published findings turn out not to replicate</a>, or to have smaller effects than what was initially purported. Plenty of science does replicate – meaning the experiments turn out the same way when you repeat them – but the amount that doesn’t is too much for comfort.</p>
<p>Much of science is about identifying relationships between variables. For example, how might certain genes increase the risk of acquiring certain diseases, or how might certain parenting styles influence children’s emotional development? To our disappointment, there are no tests that allow us to perfectly sort true associations from spurious ones. Sometimes we get it wrong, even with the most rigorous methods.</p>
<p>But there are also ways in which scientists increase their chances of getting it wrong. Running studies with small samples, mining data for correlations and forming hypotheses to fit an experiment’s results after the fact are <a href="http://fivethirtyeight.com/features/science-isnt-broken/">just some of the ways</a> to <a href="http://doi.org/10.1038/526182a">increase the number of false discoveries</a>. </p>
<p>It’s not like we don’t know how to do better. Scientists who study scientific methods have known about <a href="http://doi.org/10.1086/288135">feasible remedies for decades</a>. Unfortunately, their advice often falls on deaf ears. Why? Why aren’t scientific methods better than they are? In a word: incentives. But perhaps not in the way you think. </p>
<h2>Incentives for ‘good’ behavior</h2>
<p>In the 1970s, <a href="https://en.wikipedia.org/wiki/Campbell%27s_law">psychologists</a> and <a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">economists</a> began to point out the danger in relying on quantitative measures for social decision-making. For example, when public schools are evaluated by students’ performance on standardized tests, teachers respond by teaching “to the test” – at the expense of broader material more important for critical thinking. In turn, the test serves largely as a measure of how well the school can prepare students for the test.</p>
<p>We can see this principle – often summarized as “when a measure becomes a target, it ceases to be a good measure” – playing out in the realm of research. Science is a competitive enterprise. There are <a href="http://doi.org/10.1038/520144a">far more credentialed scholars and researchers</a> than there are university professorships or comparably prestigious research positions. Once someone acquires a research position, there is additional competition for tenure, grant funding, and support and placement for graduate students. Due to this competition for resources, scientists must be evaluated and compared. How do you tell if someone is a good scientist?</p>
<p>An oft-used metric is the number of publications one has in peer-reviewed journals, as well as the status of those journals (along with related metrics, such as the <a href="https://en.wikipedia.org/wiki/H-index"><em>h</em>-index</a>, which purports to capture both how much a researcher publishes and how often that work is cited by others). Metrics like these make it straightforward to compare researchers whose work may otherwise be quite different. Unfortunately, this also makes these numbers susceptible to exploitation. </p>
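<p>For readers unfamiliar with it, the <em>h</em>-index is defined as the largest number <em>h</em> such that a researcher has <em>h</em> papers each cited at least <em>h</em> times. A minimal sketch of the calculation – with made-up citation counts purely for illustration – looks like this:</p>
<pre><code>def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)       # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank                               # this paper still clears the bar
        else:
            break
    return h

# Two hypothetical researchers with the same total citations (108) but different profiles
print(h_index([50, 40, 6, 5, 4, 3]))      # 4: citations concentrated in a few papers
print(h_index([18, 18, 18, 18, 18, 18]))  # 6: citations spread evenly across papers
</code></pre>
<p>The same citation total can yield quite different scores depending on how citations are distributed, which is part of why such single-number summaries are so easy to compare – and to game.</p>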
<p>If scientists are motivated to publish often and in high-impact journals, we might expect them to actively try to game the system. And certainly, some do – as seen in recent high-profile cases of scientific fraud (including in <a href="https://en.wikipedia.org/wiki/Sch%C3%B6n_scandal">physics</a>, <a href="http://www.nytimes.com/2013/04/28/magazine/diederik-stapels-audacious-academic-fraud.html">social psychology</a> and <a href="http://onlinelibrary.wiley.com/doi/10.1111/bcp.12992/full">clinical pharmacology</a>). If malicious fraud is the prime concern, then perhaps the solution is simply heightened vigilance.</p>
<p>However, most scientists are, I believe, genuinely interested in learning about the world, and honest. The problem with incentives is they can shape cultural norms without any intention on the part of individuals. </p>
<h2>Cultural evolution of scientific practices</h2>
<figure class="align-right zoomable">
<a href="https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=784&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=784&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=784&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=986&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=986&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138454/original/image-20160920-11090-684nc6.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=986&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">Scientists work within a culture of research.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/iaea_imagebank/8199500456">IAEA</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>In a <a href="http://rsos.royalsocietypublishing.org/lookup/doi/10.1098/rsos.160384">recent paper</a>, anthropologist <a href="http://xcelab.net/rm/">Richard McElreath</a> and I considered the incentives in science through the lens of <a href="http://www.oxfordbibliographies.com/view/document/obo-9780199766567/obo-9780199766567-0038.xml">cultural evolution</a>, an emerging field that draws on ideas and models from evolutionary biology, epidemiology, psychology and the social sciences to understand cultural organization and change.</p>
<p>In our analysis, we assumed that methods associated with greater success in academic careers will, all else equal, tend to spread. The spread of more successful methods requires no conscious evaluation of how scientists do or do not “game the system.” </p>
<p>Recall that publications, particularly in high-impact journals, are the currency on which decisions related to hiring, promotions and funding are based. Studies that show large and surprising associations tend to be favored for publication in top journals, while small, unsurprising or complicated results are more difficult to publish.</p>
<p>But <a href="http://dx.doi.org/10.1371/journal.pmed.0020124">most hypotheses are probably wrong</a>, and performing rigorous tests of novel hypotheses (as well as coming up with good hypotheses in the first place) takes time and effort. Methods that boost false positives (incorrectly identifying a relationship where none exists) and overestimate effect sizes will, on average, allow their users to publish more often. In other words, when novel results are incentivized, methods that produce them – by whatever means – at the fastest pace will become implicitly or explicitly encouraged.</p>
<p>Over time, those shoddy methods will become associated with success, and they will tend to spread. The argument can extend beyond norms of questionable research practices to norms of misunderstanding, if those misunderstandings lead to success. For example, despite over a century of common usage, the <em>p</em>-value, a standard measure of statistical significance, is still <a href="http://dx.doi.org/10.1080/00031305.2016.1154108">widely misunderstood</a>.</p>
<p>The cultural evolution of shoddy science in response to publication incentives requires no conscious strategizing, cheating or loafing on the part of individual researchers. There will always be researchers committed to rigorous methods and scientific integrity. But as long as institutional incentives reward positive, novel results at the expense of rigor, the rate of bad science, on average, will increase. </p>
<h2>Simulating scientists and their incentives</h2>
<p>There is ample evidence suggesting that publication incentives have been negatively shaping scientific research for decades. The frequency of the words <a href="http://dx.doi.org/10.1136/bmj.h6467">“innovative,” “groundbreaking” and “novel”</a> in biomedical abstracts increased by 2,500 percent or more over the past 40 years. Moreover, researchers often <a href="http://dx.doi.org/10.1126/science.1255484">don’t report when hypotheses fail to generate positive results</a>, lest reporting such failures hinders publication.</p>
<figure class="align-left zoomable">
<a href="https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=1000&fit=clip"><img alt="" src="https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=736&fit=crop&dpr=1 600w, https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=736&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=736&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=925&fit=crop&dpr=1 754w, https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=925&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/138455/original/image-20160920-11127-ntmb9h.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=925&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px"></a>
<figcaption>
<span class="caption">There doesn’t need to be anything nefarious going on for scientists to stick with the suboptimal methods that help them get ahead.</span>
<span class="attribution"><a class="source" href="https://www.flickr.com/photos/iaea_imagebank/8198415199">IAEA</a>, <a class="license" href="http://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a></span>
</figcaption>
</figure>
<p>We reviewed <a href="http://www.statisticsdonewrong.com/power.html">statistical power</a> in the social and behavioral science literature. Statistical power is a quantitative measure of a research design’s ability to detect a true association when one is present. The simplest way to increase statistical power is to increase one’s sample size – which also lengthens the time needed to collect data. Beginning in the 1960s, there have been <a href="http://datacolada.org/wp-content/uploads/2013/10/3416-Sedlmeier-Gigerenzer-Psych-Bull-1989-Do-studies-of-statistical-power-have-an-effect-on-the-power-of-studies.pdf">repeated outcries that statistical power is far too low</a>. Nevertheless, we found that statistical power, on average, <a href="http://rsos.royalsocietypublishing.org/lookup/doi/10.1098/rsos.160384">has not increased</a>.</p>
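<p>To make the sample-size trade-off concrete, here is a minimal sketch using the standard normal approximation for a two-sided, two-sample comparison of means (the effect size and sample sizes below are illustrative, not drawn from our review):</p>
<pre><code>from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of means.

    effect_size is the standardized mean difference (Cohen's d);
    the normal approximation is used in place of the exact t distribution.
    """
    z_crit = norm.ppf(1 - alpha / 2)                # two-sided critical value
    delta = effect_size * (n_per_group / 2) ** 0.5  # noncentrality for equal group sizes
    return norm.cdf(delta - z_crit) + norm.cdf(-delta - z_crit)

# A "medium" true effect (d = 0.5): power climbs steeply as each group grows from 20 to 200
for n in (20, 50, 100, 200):
    print(n, round(approx_power(0.5, n), 2))
</code></pre>
<p>Larger samples buy power, but every extra participant also costs time and money – the tension the simulation described below builds on.</p>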
<p>The evidence is suggestive, but it is not conclusive. To more systematically demonstrate the logic of our argument, we built a computer model in which a population of research labs studied hypotheses, only some of which were true, and attempted to publish their results.</p>
<p>As part of our analysis, we assumed that each lab exerted a characteristic level of “effort.” Increasing effort lowered the rate of false positives, and also lengthened the time between results. As in reality, we assumed that novel positive results were easier to publish than negative results. All of our simulated labs were totally honest: they never cheated. However, labs that published more were more likely to have their methods “reproduced” in new labs – just as they would be in reality as students and postdocs leave successful labs where they trained and set up their own labs. We then allowed the population to evolve.</p>
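<p>The published model has more moving parts, but a stripped-down sketch along these lines (with illustrative parameter values that are not taken from the paper) reproduces the basic dynamic:</p>
<pre><code>import random

random.seed(1)

BASE_RATE = 0.1   # fraction of tested hypotheses that are actually true
POWER = 0.8       # chance a true effect is detected; held fixed for simplicity
N_LABS = 100
GENERATIONS = 200

def positives_per_period(effort):
    """Publishable positive results per period for a lab exerting a given effort (0.1 to 1).

    Lower effort means more hypotheses tested per period and a higher false-positive rate.
    """
    tests = 2.0 - effort
    false_pos_rate = 0.05 + 0.4 * (1.0 - effort)
    return tests * (BASE_RATE * POWER + (1.0 - BASE_RATE) * false_pos_rate)

# Start with a population of fairly careful labs.
efforts = [0.8] * N_LABS
for _ in range(GENERATIONS):
    # Labs that publish more are more likely to "reproduce", i.e. to train the next labs.
    weights = [positives_per_period(e) for e in efforts]
    parents = random.choices(efforts, weights=weights, k=N_LABS)
    # New labs inherit their parent's methods, with a little mutation.
    efforts = [min(1.0, max(0.1, e + random.gauss(0.0, 0.03))) for e in parents]

mean_effort = sum(efforts) / N_LABS
print(f"mean effort after {GENERATIONS} generations: {mean_effort:.2f}")
print(f"false-positive rate at that effort: {0.05 + 0.4 * (1.0 - mean_effort):.2f}")
</code></pre>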
<p>The result: Over time, effort decreased to its minimum value, and the rate of false discoveries skyrocketed. </p>
<p>And replication – while a crucial tool for generating robust scientific theories – isn’t going to be science’s savior. Our simulations indicate that more replication won’t stem the evolution of bad science.</p>
<h2>Taking on the system</h2>
<p>The bottom-line message from all this is that it’s not sufficient to impose high ethical standards (assuming that were possible), nor to make sure all scientists are informed about best practices (though spreading awareness is certainly one of our goals). A culture of bad science can evolve as a result of institutional incentives that prioritize simple quantitative metrics as measures of success. </p>
<p>There are indications that the situation is improving. Journals, organizations, and universities are increasingly emphasizing <a href="http://www.psychologicalscience.org/index.php/replication">replication</a>, <a href="https://royalsociety.org/journals/ethics-policies/data-sharing-mining/">open data</a>, <a href="http://blogs.plos.org/everyone/2015/02/25/positively-negative-new-plos-one-collection-focusing-negative-null-inconclusive-results/">the publication of negative results</a> and more <a href="https://www.idrc.ca/sites/default/files/sp/Documents%20EN/Research-Quality-Plus-A-Holistic-Approach-to-Evaluating-Research.pdf">holistic evaluations</a>. Internet applications such as <a href="https://twitter.com/lakens/status/774953862012755968">Twitter</a> and <a href="https://www.youtube.com/watch?v=WFv2vS8ESkk&list=PLDcUM9US4XdMdZOhJWJJD4mDBMnbTWw_z">YouTube</a> allow education about best practices to propagate widely, along with spreading norms of holism and integrity. </p>
<p>There are also signs that the old ways are far from dead. For example, one regularly hears researchers discussed in terms of how much or where they publish. The good news is that as long as there are smart, interesting people doing science, there will always be some good science. And from where I sit, there is still quite a bit of it.</p>
<p class="fine-print"><em><span>Paul Smaldino does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Embracing more rigorous scientific methods would mean getting science right more often than we currently do. But the way we value and reward scientists makes this a challenge.Paul Smaldino, Assistant Professor of Cognitive and Information Sciences, University of California, MercedLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/652722016-09-19T19:27:48Z2016-09-19T19:27:48ZHow the funding of science research in South Africa can be overhauled<figure><img src="https://images.theconversation.com/files/137565/original/image-20160913-4983-4s3ecl.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">South Africa needs some universities that focus on teaching, and others that concentrate on research.</span> <span class="attribution"><span class="source">Shutterstock</span></span></figcaption></figure><p>South Africa’s universities are bracing themselves for a <a href="http://www.universityworldnews.com/article.php?story=20160813080334155">tough 2017</a>. The country’s National Treasury has warned that there’s simply <a href="http://www.timeslive.co.za/local/2016/08/12/No-money-in-Treasury-budget-for-zero-university-fee-rise">not enough money</a> to make up the shortfall created by a freeze on fees during 2016.</p>
<p>At the same time, the country’s universities are <a href="http://businesstech.co.za/news/general/98423/university-rankings-in-south-africa/">slipping down</a> global ranking tables. Their worsening performance <a href="http://businesstech.co.za/news/general/135643/south-african-universities-struggle-in-global-ranking-as-fee-debate-begins-to-bite/">suggests</a> less investment in research and postgraduate output, factors which heavily influence how rankings are calculated. </p>
<p>And yet research, development, science and technology are all recognised as crucial growth factors – both for the country’s economy and for individual universities. The <a href="http://www.gov.za/sites/www.gov.za/files/Executive%20Summary-NDP%202030%20-%20Our%20future%20-%20make%20it%20work.pdf">National Development Plan</a>, considered a blueprint for the country’s growth until 2030, states:</p>
<blockquote>
<p>Science and technology continue to revolutionise the way goods and services are produced and traded. South Africa needs to sharpen its innovative edge and continue contributing to global scientific and technological advancement. This requires greater investment in research and development, better use of existing resources … </p>
</blockquote>
<p>That “greater investment” hasn’t materialised yet. South Africa, with a population of 52 million, <a href="https://data.oecd.org/rd/gross-domestic-spending-on-r-d.htm">spends 0.73%</a> of its Gross Domestic Product on research and development. Australia, home to 24 million people, <a href="https://data.oecd.org/rd/gross-domestic-spending-on-r-d.htm">spends 2.1%</a>. South Korea, home to 50 million people, <a href="https://data.oecd.org/rd/gross-domestic-spending-on-r-d.htm">spends 4.3%</a>. These two nations’ investments have paid dividends: they are considered world leaders in the fields of science, technology, engineering and maths.</p>
<p>It’s time for South Africa to put its money where its mouth is. I propose a total overhaul of how science funding is allocated. This should be done on the premise that not all universities should be focusing on research and development. Some should be funded only as teaching institutions; others with proven track records should concentrate on research and scientific output. This will save billions that can be redirected to improve the quality of science teaching and the country’s research output more broadly.</p>
<h2>A new structure is needed</h2>
<p>There are 26 universities in South Africa. All of these teach the “hard sciences” – such as Chemistry, Physics and Mathematics – up to the 4th year Honours degree. They receive <a href="http://www.dhet.gov.za/Financial%20and%20Physical%20Planning/Ministerial%20Statement%20on%20University%20Funding%202016-2017%20and%202017-2018,%20November%202015.pdf">funds</a> towards this work from the Department of Higher Education and Training. </p>
<p>Beyond Honours, at the levels of masters and doctoral studies, the focus switches sharply to research. Research enterprises in the sciences are far more expensive to run than teaching programmes. For research you need laboratories, instruments, increased access to expensive online journals and more.</p>
<p>But more than half of the country’s 26 universities are simply not producing enough good quality research. The QS World rankings for <a href="http://www.topuniversities.com/qs-world-university-rankings">2016/17</a> feature only nine South African universities. These tend to be institutions that were well resourced during the apartheid era. Their previously disadvantaged counterparts – which largely catered for black students – have less research infrastructure and so struggle more to attract top researchers. This affects their performance when it comes to output.</p>
<p>Perhaps it is time to rethink how academic research is structured in the costly sciences. Masters and doctoral research students are serious about their work. They want to publish in top journals. They want to perform research at well-equipped laboratories. They want to work with the best professors in the field, at universities with a solid research reputation.</p>
<p>Research students know it is the combined quality of these factors that determines the next step in their careers. I’d argue that it’s necessary to focus and consolidate science research endeavours across the country at institutions with a proven track record of research output. And it’s time to stop giving research-linked funding to institutions that don’t perform.</p>
<h2>Savings put to good use</h2>
<p>Given South Africa’s history, this suggestion might seem controversial. It implies that formerly black and disadvantaged universities won’t ever be able to become proper research institutions and ought to be used solely for teaching. Some would argue that this perpetuates the inequalities left by apartheid. I can accept this. But the reality is that South Africa cannot become a world leader in the sciences using the current system.</p>
<p>And the money that is saved by not unnecessarily funding research at some institutions can be ploughed back into the country more broadly. There are three areas where these savings could be used:</p>
<ol>
<li><p>Funding worthy students from all socio-economic backgrounds to attend top research institutions;</p></li>
<li><p>Bolstering the activities that are already underway at research-active universities. South Africa has a <a href="http://www.thesouthafrican.com/the-top-10-south-african-inventions/">proud history</a> of scientific discovery and innovation. In recent times, paleontologists have discovered a new <a href="http://www.bbc.com/news/science-environment-34192447">human-like species</a>; the country will soon host the largest radio telescope in the world, the Square Kilometre Array <a href="https://www.ska.ac.za/">(SKA)</a>. There’s also great work being done towards vaccines and <a href="http://ewn.co.za/2016/07/27/UCT-academics-in-malaria-treatment-breakthrough">disease cures</a>.</p></li>
<li><p>Launching more desperately needed science, technology, engineering and maths teacher training colleges. South Africa simply doesn’t have enough science and <a href="http://www.bdlive.co.za/national/education/2015/07/16/one-in-four-south-african-schools-do-not-offer-maths-in-matric-curriculum">maths teachers</a> in its schools at the moment. These colleges could be based at teaching universities that have basic infrastructure.</p></li>
</ol>
<p>This approach is not without precedent. </p>
<h2>International examples</h2>
<p>Consider Australia’s <a href="https://www.go8.edu.au/">Group of Eight</a> (or Go8) university model. Australia has 43 universities and, until 1999, the government funded all these institutions’ research more or less equally. Then the formula was changed and the Go8 was born.</p>
<p>This is a coalition of eight research-intensive universities, all of which are consistently ranked in the world’s top 200 institutions. The Go8 receive about 75% of Australian competitive grant funding. They spend some $AU 6 billion (about R64.2 billion) on research annually and award 53% of all doctorates in the country.</p>
<p>In the US, research universities emerged in the years after World War II as a global role model. Having studied there, I know that almost all of these institutions’ postgraduate students did their undergraduate degrees elsewhere before relocating to a research-intensive institution for their postgraduate work. It is also well established that those looking for academic careers had better earn their doctorates at <a href="http://www.slate.com/articles/life/education/2015/02/university_hiring_if_you_didn_t_get_your_ph_d_at_an_elite_university_good.html">top research universities</a>.</p>
<p>Yet in South Africa it is quite common to get one’s undergraduate and postgraduate degrees at the same institution. What is so wrong with pursuing your undergraduate degree at a university that’s geared for great teaching, then relocating to a research institution for postgraduate study? </p>
<h2>Getting serious</h2>
<p>South Africa needs to prove that it’s serious about investing in research and development to benefit all its citizens. To do so, it must consolidate and focus research quality and expenditure in the right places. It must use its limited resources as carefully as possible. This means scrapping financially draining, unproductive postgraduate degrees and research activities at many universities. This will boost the whole nation in the long term.</p>
<p class="fine-print"><em><span>Werner van Zyl receives funding from the National Research Foundation (NRF), and the ESKOM Tertiary Education Support Programme (TESP). </span></em></p>South Africa must examine how science funding is allocated to universities. It also needs to acknowledge that not all universities should be focusing on research and development.Werner van Zyl, Associate Professor of Chemistry, University of KwaZulu-NatalLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/632372016-08-01T11:45:44Z2016-08-01T11:45:44ZStress put on academics by the REF recognised in Stern review<p>All researchers at a university should be considered when the quality of an institution’s research is assessed, according to a <a href="https://www.gov.uk/government/publications/research-excellence-framework-review">new independent review</a> of the process by Lord Nicholas Stern, president of the British Academy. </p>
<p>The idea of a “universal submission” of researchers is one of Stern’s key recommendations for changes to the <a href="https://theconversation.com/qanda-what-is-the-ref-and-how-is-the-quality-of-university-research-measured-35529">Research Excellence Framework (REF)</a> – a process which influences the amount of money universities receive from the government for research. The last REF took <a href="http://www.ref.ac.uk/">place in 2014</a> and the next round of assessment is due in 2020-21. </p>
<p>Some institutions were hyper-selective in the choice of those researchers they submitted to the exercise in 2014. This meant that claims of “institutional” research excellence made by universities were only ever partial. The recommendation that all researchers should be assessed signals an effort by Stern to correct and dispel the kinds of <a href="https://theconversation.com/game-playing-of-the-ref-makes-it-an-incomplete-census-35707">“gaming” of the REF</a> perpetuated by universities in 2014. </p>
<p>Criticisms of institutional selectivity in 2014 are <a href="http://link.springer.com/article/10.1007/s11024-016-9298-5">unsurprisingly abundant</a>. Any claim of institutional excellence when “excellence” only refers to a top slice and neglects to factor in the contributions of others is not an entirely faithful reflection of collective achievement.</p>
<p>Stern’s recommendation is not only aimed at preventing the future telling of half-truths that distort and exaggerate the reality of the UK’s research landscape. It could also help to counteract the devastating impact on morale, self-worth and trust caused by an institutional policy of cherry-picking “the best” researchers. </p>
<figure class="align-center ">
<img alt="" src="https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&fit=clip" srcset="https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=400&fit=crop&dpr=1 600w, https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=400&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=400&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=503&fit=crop&dpr=1 754w, https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=503&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/132611/original/image-20160801-28357-bdhlu5.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=503&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">No more game playing.</span>
<span class="attribution"><span class="source">Africa studio/www.shutterstock.com</span></span>
</figcaption>
</figure>
<h2>Unintended consequences</h2>
<p>But while this recommendation might eradicate one form of selective behaviour, it might produce another. One conceivable scenario is that of universities redrawing researchers’ employment contracts.</p>
<p>In simple terms, researchers deemed unlikely to contribute to a competitive REF submission might have the time their contracts allocate for research withdrawn by their institutions. Enforced movement from teaching-and-research to “teaching-only” contracts would change whether an academic is eligible to be submitted to the REF at all. Institutions would then not be obliged to include them – so the competitiveness of the university’s submission would not necessarily be compromised.</p>
<p>It’s possible that some researchers could be deemed “less than eligible” for REF submission based on a number of factors, such as the stage of their career or their domestic circumstance. The implicit danger of universal submission is that these researchers will suffer worse marginalisation and forms of professional grievance than might have occurred in 2014. This raises a more profound problem of how the very process of the REF could end up diminishing what universities recognise as the role and contribution of the researcher: primarily, the successful procurement of research funds and prominence. </p>
<h2>A broader role for impact</h2>
<p>Stern also makes some recommendations about how assessment of research excellence includes an understanding of the wider public “impact” it has. Impact contributed to <a href="https://theconversation.com/the-impact-of-impact-on-the-ref-35636">20% of the way the REF was assessed</a> in 2014 and Stern advises that it should “not comprise less than 20% in the next exercise” – leaving the door open for the share to be increased. He also suggests that impact on public engagement and understanding might play a greater role. This is particularly pertinent given the <a href="http://www.tandfonline.com/doi/full/10.1080/13583883.2011.641578">conundrum academics face</a> when thinking about their public engagement work as either a conduit to future impact, or a form of impact in its own right.</p>
<p>Stern also recommends that the impact of research on teaching is recognised – an area which was murky at best in 2014. This appears designed to resolve a binary division which tends to segregate teaching and research cultures and which might be further exaggerated by the impending <a href="https://theconversation.com/uk/topics/teaching-excellence-framework">Teaching Excellence Framework</a>. Stern also suggests that “institutional” impact case studies could be used to promote a greater culture of interdisciplinary research. </p>
<h2>Moving around</h2>
<p>One other conspicuous recommendation is that research outputs, like research impacts, ought not to be portable – meaning that academics can only count research undertaken and published while they were at their current institution as part of the REF exercise. This aims to fix the problem of poaching and “rent-seeking” that goes on in the run-up to REF submission – behaviour that is disruptive and detrimental to what is ultimately a relatively small and highly interconnected professional community.</p>
<p>But some will identify this as being especially problematic for career access and mobility in a workforce that is typically migratory. For example, a researcher who has to move institution just before a REF exercise could find that their previous four or five years of research outputs no longer count.</p>
<p>At first glance, the Stern review offers UK universities a series of recommendations that seek to counter the kinds of deleterious effects on academics’ professional well-being and health caused by institutional game-playing of the REF. Here is an ambition for a broader, fairer and more equitable system of evaluation that all can “compete” within. </p>
<p>The Stern review should not, however, be mistaken for a ready-made solution to the excesses created by higher education’s new marketplace. The competition fetish which the REF embodies is seen by many to contaminate academic identity, agency, practice and community. </p>
<p>Ultimately, the Stern review makes some necessary and important admissions of the manifold risks to the welfare of a research community in which the pursuit of excellence and public money trumps all other concerns. One hopes it might also shift the spotlight away from individual researchers themselves and onto the organisational practices of their universities.</p>
<p class="fine-print"><em><span>Richard Watermeyer does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.</span></em></p>Recommendations in a new report aim to stamp out game-playing when it comes to the Research Excellence Framework.Richard Watermeyer, Senior Lecturer in Education, University of BathLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/567452016-04-10T20:02:33Z2016-04-10T20:02:33ZWhen measuring research, we must remember that ‘engagement’ and ‘impact’ are not the same thing<figure><img src="https://images.theconversation.com/files/117931/original/image-20160408-23914-15ysns8.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">What is the purpose of measuring engagement, impact or quality?</span> <span class="attribution"><span class="source">from www.shutterstock.com</span></span></figcaption></figure><p>In the <a href="https://theconversation.com/turnbull-seeks-ideas-boom-with-innovation-agenda-experts-react-51892">Innovation Statement</a> late last year, the federal government indicated a strong belief that<a href="https://theconversation.com/ten-rules-for-successful-research-collaboration-53826"> more collaboration</a> should occur between industry and university researchers. </p>
<p>At the same time, <a href="https://docs.education.gov.au/system/files/doc/other/20151203_main_report1.pdf">government</a>, <a href="https://www.go8.edu.au/programs-and-fellowships/excellence-innovation-australia-eia-trial">education</a> and industry groupings have made numerous recommendations for the “impact” of university research to be assessed alongside or in addition to the existing assessment of the quality of research. </p>
<h2>How should we measure research?</h2>
<p>But what should we measure and, more importantly, why should we measure it?</p>
<p>In accounting, we stress that the measurement basis of something inevitably reflects the purpose for which that measure is to be used. </p>
<p>So what is the purpose of measuring engagement, <a href="https://theconversation.com/there-is-no-easy-way-to-measure-the-impact-of-university-research-on-society-50856">impact</a> or, for that matter, quality? </p>
<p>The primary reason for measuring quality seems fairly self-evident – as a major stakeholder in terms of funding (especially dedicated research-only funding), the government wants an assessment of just “how good” by academic standards such research really is. </p>
<p>Looking ahead, it has been speculated that measures of quality such as the Excellence in Research for Australia (ERA) rankings could influence future funding via prestigious competitive schemes (such as the Australian Research Council), block funding for infrastructure and the availability of government support for doctoral students via Australian Postgraduate Awards.</p>
<p>So the demand for a measure of research quality and the potential uses of such a measure are pretty clear.</p>
<p>But what valid reasons are there for investing significant resources in the measurement of research impact or engagement? </p>
<p>If high-quality research addresses important practical problems (large or small), surely we would expect impact to follow?</p>
<p>In this sense, the extent of impact is really a joint product of the quality (or robustness) of research and the choice of topic (ie, practical versus more esoteric).</p>
<h2>Research impact needs time</h2>
<p>But over what period should impact be measured? </p>
<p><a href="https://www.go8.edu.au/programs-and-fellowships/excellence-innovation-australia-eia-trial">Recent exercises</a> such as that conducted by the Australian Technology Network and Group of Eight have a relatively short-term focus, as would any “impact assessment” tied to the corresponding period covered by the existing ERA time frame (say the last six years). </p>
<p>I and many others maintain that impact can only be assessed over much longer periods, and that in many cases short-term impact is potentially misleading. </p>
<p>How often have supposedly impactful results subsequently been rejected or overturned? </p>
<p>Such examples inevitably turn out to reflect low quality (and in some cases outright fraudulent) research.</p>
<h2>Ranking impact</h2>
<p>Finally, how can impact be ranked? Is there a viable measure that can distinguish between high and low impact? Existing case-study approaches are unlikely to yield any form of quantifiable measurement of research impact.</p>
<p>Equally puzzling is the call to measure research engagement. What is the purpose of such an exercise? Surely in a financially constrained research environment, universities readily recognise the importance of such engagement and pursue it constantly. </p>
<p>We don’t need a national assessment of engagement to encourage universities to engage. </p>
<p>Motive aside, one approach canvassed is the quantum of non-government investment in research (ie, non-government research income). </p>
<p>This is arguably one rather limited way to measure engagement, and is focused on input rather than output. If the purpose of any measurement is to capture outcomes, does it make sense to focus exclusively on inputs? The logic of this escapes me.</p>
<h2>Engagement and impact are not the same thing</h2>
<p>Even more worryingly, some use the terms engagement and impact interchangeably. </p>
<p>They would have us believe that a simple (but useful) measure of impact is the extent to which university researchers receive industry funding. Surely this is, at best, a measure of engagement, not impact.</p>
<p>Although the two are likely correlated, the strength of that correlation will vary greatly across discipline areas.</p>
<p>Further, in business disciplines, much of the “knowledge transfer” that occurs via education (including areas such as executive programs) reflects the impact of the constant process of researching better business practices across areas such as accounting, finance, economics, marketing and so on.</p>
<p>Discretionary expenditure on such programs by business is surely an indication of the extent to which business schools and industry are engaged, yet this would be ignored if we focused on research income alone.</p>
<p>We must not lose sight of the fact that quality (ie, rigour and innovativeness) is a necessary but not sufficient condition for broader research impact.</p>
<p>Engagement is not impact, and simple measures such as non-government research income tell us very little about genuine external engagement between universities and industry.</p>
<p>As accountants know, performance measurement reflects its purpose. What we need before any further national assessment of attributes such as impact or engagement is a clear understanding of the purpose of such an exercise.</p>
<p>Only when the purpose is clearly specified can we have a sensible debate about measurement principles.</p>
<p class="fine-print"><em><span>Stephen Taylor is affiliated with the Australian Business Dean's Council as the 2016 ABDC Research Scholar </span></em></p>Engagement is not impact, and simple measures such as non-government research income tell us very little about genuine external engagement between universities and industry.Stephen Taylor, Professor of Accounting, University of Technology SydneyLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/527462016-01-05T19:17:31Z2016-01-05T19:17:31ZCrisis facing Indian higher education – and how Australian universities can help<figure><img src="https://images.theconversation.com/files/107235/original/image-20160105-28994-txb4wi.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Over the next 10 years, there is forecast to be 14 million more students studying in India.</span> <span class="attribution"><span class="source">Flickr/reinholdbehringer</span></span></figcaption></figure><p>India is facing a crisis around access to good college and university education. </p>
<p>Although there has been an enormous expansion in higher education in India over the past 30 years – the proportion of those attending college or university has increased from 6% in 1983 to 18% in 2014 – there is still a huge problem around quality. </p>
<p>The Indian government is aware of this situation and in 2013 launched a new higher education <a href="http://www.rand.org/pubs/research_reports/RR225.html">improvement programme</a>.</p>
<p>Currently 18% of Indian youth aged 18-21 are enrolled in university. The government aims to increase this to 30% by 2020, which will mean increasing the number of university places from the current 26 million to roughly 40 million. </p>
<p>With India investing hugely in reforming its higher education sector, it’s now time for Australia to view this growth as an opportunity and look for ways to collaborate with and support Indian institutions.</p>
<p>Here are a few ways Australian universities can help:</p>
<h2>Train staff</h2>
<p>Between 1983 and 2013, the number of engineering colleges in India grew by 20% each year. This meant the number of trained academic staff needed to increase by 30 times over the same period in order to cater for the rise in the number of students studying engineering. </p>
<p>But in practice, academic staff numbers increased only twofold. Private college management committees responded to this situation by <a href="http://www.sup.org/books/title/?id=17650">hiring teachers who lacked the required qualifications</a> and professional experience.</p>
<p>There is a major opportunity here for Australian universities to partner with Indian institutions in the training of those in the fields of engineering and science. </p>
<p>This could help boost enrolments to Australian universities through the development of joint courses across Australia-India institutions, and also increase the profile of Australia among Indian young people.</p>
<p>With numbers of Indian students choosing to study in Australia continuing to fall due to a combination of the visa crackdown, <a href="http://www.abc.net.au/news/2015-08-25/australian-dollar-fall-is-good-news-for-international-students/6723438">high Australian dollar</a>, and <a href="http://www.heraldsun.com.au/news/law-order/indian-student-numbers-plunge-after-fresh-attack/story-fni0fee2-1226795039267">safety concerns</a>, forming partnerships could help rebuild links with India.</p>
<h2>Root out corruption</h2>
<p>Although the Indian government decrees that universities must be not for profit, corruption is rife in the sector, with private colleges often asking students to pay cash donations before they start their course. </p>
<p>Other cases of corruption include: higher castes (caste being a division of society based upon differences of wealth, rank or occupation) seizing scholarships meant for lower castes, irregularities in hiring for academic positions, the establishment of bogus colleges, cheating in examinations, and the use of colleges to launder money.</p>
<p>In one example, a private education entrepreneur set up an online college called “zap”, which – after collecting donations from students – disappeared without trace.</p>
<p>In the period between the early 1980s and late 2000s, the market was unable to solve the problem of bogus and poor quality private education because demand so massively outstripped supply. </p>
<p>By around 2010 this situation had been reversed in some areas, leaving institutes forced to compete.</p>
<p>There are opportunities here for Australian institutions to partner with the Indian government in reform, to engage in talks about introducing compulsory accreditation, assessment, and accountability processes. This would feed into the wider effort to export financial services from Australia to India. </p>
<h2>Share knowledge on access to higher education</h2>
<p>Although India has managed to increase the number of women entering higher education – the ratio of men to women in higher education moved from 8:1 in 1950 to around 1:1 in 2014 – other issues such as class divide still persist.</p>
<p>Only the very best performing poor students can obtain a good education via scholarships to elite institutions.</p>
<p>The vast majority attend the poorer quality, and therefore cheaper, private colleges or public-sector institutions. </p>
<p>Easy access to student loans might address this situation. Currently less than 3% of students in India take loans, reflecting the difficulty of acquiring finance for courses that do not have transparent, set fees. </p>
<p>Australia has a great deal of experience in this area and there is considerable potential for the Australian government to advise India on suitable methods of enhancing public access to higher education. Solutions could include allowing universities and colleges to set their own fees in a transparent manner while also creating a loans or graduate taxation scheme.</p>
<h2>Research partnerships</h2>
<p>There are only seven Indian institutions in the top 400 in the 2015 <a href="http://www.topuniversities.com/university-rankings/world-university-rankings/2014#sorting=rank+region=+country=+faculty=+stars=false+search=">QS World Rankings</a> and none in the top 100. India arguably does not contain a single world-class university. </p>
<p>Here again there are opportunities for Australia to partner with India. For example, Monash University already has a longstanding tie-up with the Indian Institute of Technology (IIT) Bombay; the University of Melbourne has links with IIT Kanpur, and many others are stepping into this space.</p>
<p>Using such links to encourage social science and arts teaching would be an innovative and very helpful step.</p>
<p>There are also excellent initiatives being led by Deakin University in the area of skill provision in India. Just two months ago, Deakin University <a href="https://www.deakin.edu.au/research/story?story_id=2012/02/27/research-partnership-to-build-skills">announced a strategic research partnership</a> in India with Bharat Forge Ltd, under which researchers will collaborate on research relevant to the manufacture of car parts.</p>
<p>Australia/India collaboration in the higher education sector is an area of enormous opportunity. Now is the time to act.</p>
<p class="fine-print"><em><span>Craig Jeffrey receives funding from the Economic and Social Research Council for research on youth in South Asia.</span></em></p>India has invested hugely in to reforming its higher education sector – Australia must view this as an opportunity to capitalise on this growth through partnerships and training schemes.Craig Jeffrey, Director and CEO of the Australia India Institute; professor of development geography, The University of MelbourneLicensed as Creative Commons – attribution, no derivatives.tag:theconversation.com,2011:article/521522015-12-14T19:26:24Z2015-12-14T19:26:24ZWill the impact framework fix the problems the research audit found?<figure><img src="https://images.theconversation.com/files/105367/original/image-20151211-8291-8gyfh3.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">How useful is ERA for measuring research quality?</span> <span class="attribution"><span class="source">from www.shutterstock.com</span></span></figcaption></figure><p>The results from the latest university research audit indicate that <a href="https://theconversation.com/are-australian-universities-getting-better-at-research-or-at-gaming-the-system-51895">research in Australia is improving</a>.</p>
<p>This suggests that the Excellence in Research for Australia (ERA) exercise is working: ERA has achieved its main aim of boosting the quality of Australian research.</p>
<p>However, this headline statement masks a plethora of concerns.</p>
<p>Under the <a href="https://theconversation.com/turnbull-seeks-ideas-boom-with-innovation-agenda-experts-react-51892">government’s latest reform</a> of research funding, academics will be assessed not only on the quality of their research through the ERA, but also on the economic, social and environmental impacts of their research through <a href="https://theconversation.com/watt-report-suggests-financial-incentives-for-measuring-research-impact-51815">a new impact framework</a>.</p>
<p>The impact and engagement measures herald a new era that rewards researchers for collaborating beyond their institutions.</p>
<p>It is timely, then, to reassess ERA’s utility. Is it fit for purpose? Will these two assessment systems complement or contradict one another?</p>
<h2>What has gone well in ERA?</h2>
<p>The ERA processes have recognised peer review alongside metrics. </p>
<p>Research efforts at universities are arguably now more focused towards areas of strength. There is a clearer (though contested and arguably narrower) understanding of scholarly research, particularly that which is non-traditional. </p>
<p>On paper, ERA has established a system whereby research can be compared nationally and against international benchmarks.</p>
<h2>What isn’t working?</h2>
<p>Individual researchers are not assessed by ERA per se. However, they are assessed in line with ERA at the institutional level — in a system that awards a single score for an entire discipline cohort.</p>
<p>Inter-disciplinary research has been disadvantaged. ERA’s 1,238 fields of research (FoR) codes make it problematic for researchers to publish outside their discipline or academic unit. </p>
<p>Publishing, performing or exhibiting internationally is perceived to be more prestigious than in Australia. This unjustified exoticism diminishes the importance of Australian research and puts local and Australian publication outlets at risk. </p>
<p>A lack of transparency and accountability remains a critical problem. </p>
<p>The process by which final rankings are calculated remains opaque. It is unclear how the peer review of evaluation units is moderated and benchmarked globally. The rationale for inclusion, exclusion and change in the list of journals recognised by ERA has not been made public. </p>
<p>Whole disciplines ranked “below world average” are left to conduct their own empirical research to fathom what went wrong: there is no feedback other than the score.</p>
<p>Esteem measures are narrow. The category “prestigious work of reference”, for example, is strikingly limited. It has never been opened to public discussion. Why have some publications been chosen and others omitted?</p>
<p>The ERA journal rankings were <a href="https://theconversation.com/why-the-era-had-to-change-and-what-we-should-do-next-1874">abolished in 2011</a>. However, their ghost influences decisions from journal selection to academic recruitment and promotion. </p>
<p>Universities still reward publication in high-ranking journals from the list; some institutions recognise only research published in A or A<sup>*</sup> journals, or those marked “quality” in the current list. </p>
<p><a href="http://link.springer.com/chapter/10.1007/978-94-007-7085-0_22">As predicted</a>, the editorial boards of these journals are struggling to cope with the influx of submissions. Lower-ranked journals and those with lower impact factors are struggling to survive. Many <a href="http://www.australianhumanitiesreview.org/archive/Issue-May-2009/genoni&haddow.htm">Australian journals</a> are disadvantaged by the bias towards international journals. </p>
<p>The audit culture most affects early career academics. They and others <a href="http://files.eric.ed.gov/fulltext/EJ926450.pdf">struggle</a> to negotiate the system, juggle heavy <a href="http://www-tandfonline-com.dbgw.lis.curtin.edu.au/doi/pdf/10.1080/07294360.2013.864616">teaching loads</a> and manage the <a href="https://minerva-access.unimelb.edu.au/bitstream/handle/11343/28917/264644_GoedegebuureAustraliasCasual.pdf?sequence=1">precarity</a> of casual academic employment. </p>
<p>The <a href="http://www.lhmartininstitute.edu.au/userfiles/files/research/attractiveness_ac_prof_res_brief.pdf">international mobility of Australian academics</a> is high and early career academics are the most likely to <a href="http://www.cshe.unimelb.edu.au/people/bexley_docs/The_Academic_Profession_in_Transition_Sept2011.pdf">move overseas or leave higher education</a>.</p>
<p>The loss of young academics from an ageing academic workforce risks Australia’s ability to meet future demand. Moreover, it impairs capacity for innovation.</p>
<h2>What are the concerns?</h2>
<p>Measuring engagement according to <a href="https://theconversation.com/australias-innovation-agenda-embracing-risk-or-gambling-with-public-health-52003">research income from industry</a> is concerning. </p>
<p>How, for example, will <a href="https://theconversation.com/in-universities-obsessed-with-research-heres-what-falls-between-the-cracks-938">collaborative research with not-for-profits</a> and innovative start-up companies be measured? How will the new measures account for these organisations’ exemptions from a cash contribution for Australian Research Council Linkage proposals? </p>
<p>There is a contradiction between a new impact measure that encourages a culture of risk-taking and ERA, which promotes risk-avoidance behaviours and impacts upon <a href="http://www.victoria.ac.nz/law/research/publications/vuwlr/prev-issues/volume-44,-issue-34/07-Butler.pdf">academic freedom</a> by directing research behaviour. This is particularly problematic for new researchers, blue-sky research and research with benefits that emerge only in the long term. </p>
<p>Both systems place professional service outside academic workloads. This raises new questions. Who will edit the journals, convene the conferences, become officers of professional associations, or write the handbooks and textbooks?</p>
<p>These activities are essential to the health of all disciplines. Increasingly, they are unrecognised and unrewarded. This has long-term ramifications for both research quality and impact. </p>
<p>Neither system recognises investments in partner communities that are critical to social licence to operate in many disciplines. </p>
<h2>Improving ERA</h2>
<p>Has ERA run its course? Perhaps. It certainly needs improvement. </p>
<p>The ERA process should be subject to external review. We need greater transparency about the criteria that inform assessment categories. We need discussion of categories not yet opened to consultation. </p>
<p>Given concerns <a href="https://theconversation.com/are-australian-universities-getting-better-at-research-or-at-gaming-the-system-51895">over gaming the system</a>, we need an audit of data that has been excluded from ERA submissions. There should be a review of disciplinary membership of the committees in terms of institutional representation through time.</p>
<p>We need ERA to cease peer reviews of outputs already subject to double-blind peer review.</p>
<p>There is a dire need to review the real cost of each ERA exercise, which runs approximately every three years. We need to consider whether the costs of assessing research excellence <a href="http://rev.oxfordjournals.org/content/20/3/247.short">exceed the benefits</a>. </p>
<p>While the ARC’s <a href="http://www.arc.gov.au/sites/default/files/filedepot/Public/ARC/Budget/PDF/2015_16%20ARC_Budget_Statements.pdf">administrative and departmental costs</a> are low, we also need to assess <a href="https://theconversation.com/the-era-assessed-cost-not-rated-and-league-tables-is-there-a-better-way-to-do-it-51865">the costs of university compliance</a> and of playing an effective strategic assessment game.</p>
<p>The new impact and engagement measures redress some of ERA’s deficiencies, but the challenges of cost, transparency, audit culture and external oversight remain. And teaching remains out in the cold.</p>
<p class="fine-print"><em><span>In 2007 Claire Smith was a member of the Humanities Assessment Panel for the Australian Government's short-lived Research Quality Framework. From 2009-2011 she was a member of the Humanities and Creative Arts Panel, College of Experts, Australian Research Council.</span></em></p><p class="fine-print"><em><span>Dawn Bennett is an ARC Peer Reviewer and an ARC Assessor.</span></em></p>The new impact framework will improve some of the problems arising out of the ERA’s university research audit, but major challenges will remain.Claire Smith, Professor of Archaeology, Flinders UniversityDawn Bennett, Research Professor, Curtin UniversityLicensed as Creative Commons – attribution, no derivatives.