<h1>Measuring impact of research – The Conversation</h1>
<h1>The dawning of a new ERA: getting research measurement right</h1>
<p class="fine-print">Published 2012-12-06</p>
<figure><img src="https://images.theconversation.com/files/18363/original/zfsm3bqc-1354750766.jpg?ixlib=rb-1.1.0&rect=8%2C7%2C986%2C742&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Measuring the quality and impact of university research is notoriously difficult, but it’s time to watch this space.</span> <span class="attribution"><span class="source">Measuring image from www.shutterstock.com</span></span></figcaption></figure><p>Before <a href="http://www.arc.gov.au/era/era_2012/outcomes_2012.htm">this morning’s release</a> of the Excellence in Research for Australia (ERA) report, the scheme’s champion Aidan Byrne <a href="http://www.theaustralian.com.au/national-affairs/opinion/group-of-eight-do-well-but-excellence-elsewhere-too/story-e6frgd0x-1226530028270">flagged</a> that it could soon be looking at more than just research quality.</p>
<p>Measuring research impact – the effect research has on the community, on policy and on industry – is a relatively new idea. But since the prospect of a research impact assessment raised its head, there has been a fair amount of anxiety – even panic – in Australian universities about what it might mean for funding and reputation.</p>
<p>The ERA scheme was established by the Australian Research Council (ARC) a couple of years ago to measure research excellence in Australian institutions through measures like citations in journals. But as the ERA has developed it has included more varied measures and the more recent Excellence in Innovation for Australia (EIA) <a href="http://www.go8.edu.au/__documents/go8-policy-analysis/2012/atn-go8-report-web-pdf.pdf">trial report</a> has added to the mix by looking at the option of <a href="https://theconversation.com/research-impact-can-be-measured-through-case-studies-uts-research-head-11036">using case-studies</a> judged by expert panels to assess research impact.</p>
<p>Of course, aiming for quality, high impact research makes sense, but these measures as well as any new ones need to consider the full range of academic endeavours.</p>
<p>The academic community is understandably apprehensive about new types of research measurements. It could be yet another task they will have to juggle in their already saturated and pressured schedules. </p>
<p>But will the foreshadowed research impact factor really change the way research is being done?</p>
<h2>Getting the measure</h2>
<p>If done right, another measurement that caters for research impact could offer opportunities for many researchers to capture formally the details of the great initiatives they already have in place. Researchers already connect with communities and industry sectors, bend the ear of every level of government, and guide and inform various organisations – so, why not record this?</p>
<p>Most good, savvy researchers engage in activities that would be “counted” under an impact assessment. Keeping research partners happy (<a href="http://theresearchwhisperer.wordpress.com/2012/01/17/community/">such as community organisations</a>) is the time-consuming and meticulous kind of work that could be recorded as part of impact statements. </p>
<p>Industry and community collaborators can also advocate for the strong impact and value of projects they are involved in. </p>
<p>In the post-EIA future, the costs of ensuring sustained impact and connection may be included in grant proposals (e.g. project managers in dedicated roles, industry and community liaison staff).</p>
<p>Stuart Cunningham, Director of the ARC Centre of Excellence at QUT, rightly points out that <a href="http://www.theaustralian.com.au/higher-education/research-reviews-pressure-academics/story-e6frgcjx-1226529925771">areas within the humanities will need to find ways to express “impact”</a> – and the same could be said for fundamental science research, given that the concept of “public value” can be fuzzy.</p>
<p>The increased attention to different metrics (specifically social media and so called “<a href="http://www.timeshighereducation.co.uk/story.asp?storycode=420926">alt-metrics</a>”) opens the way for a more holistic understanding of research project findings and enduring achievements. It’s not just about scholarly publications anymore.</p>
<p>For example, with impact assessment, creative arts academics who exhibit in venues could have a broader canvas, if you will, on which to demonstrate impact. Showing their work at a commercial gallery that is the premier site for their fellow practitioners should be a part of measuring an artist’s sector influence.</p>
<h2>Opening up</h2>
<p>This turn to measuring research impact, which was flagged many years ago when the now-defunct Research Quality Framework (RQF) was being planned under the Howard government, is accompanied by the ARC’s push for open access research. </p>
<p>The requirement that findings derived from federally-funded research must be publicly available means that online repositories and <a href="http://en.wikipedia.org/wiki/Gray_literature">grey literature</a> (which includes government papers and technical reports) will potentially have a higher value and profile than before. </p>
<p>For organisationally diverse research resource sites (such as <a href="http://apo.org.au/">Australian Policy Online</a> or The Conversation), this is good news.</p>
<h2>Keeping up the good work</h2>
<p>As mentioned earlier, many researchers already seek out and collaborate with industry and community representatives and creative and cultural producers, and disseminate their work in the public sphere.</p>
<p>The EIA may not require them to do much more than keep up the good work, and find compelling ways to express what they’ve done. Chances are that it won’t entail doing their research work any differently.</p>
<p>The risk comes if the EIA exercise does not elicit and implement a broad enough range of measures, or is overly driven by the applied aspects of academic work.</p>
<p>Research has a crucial effect on a society’s cultural and intellectual well-being, as is argued every time the humanities and social sciences come under attack for being pointless or wasteful. </p>
<p>In arguing the case for humanities research, and for basic research more generally, the University of Melbourne’s Vice-Chancellor Glyn Davis said in an all-staff email that “research is not academic self-indulgence”. This is worth affirming, and I would also argue that research should not be conceived of as a mere extension of industry’s demands.</p>
<p>Forging a more nuanced and embracing understanding of economic and societal impact in Australia can only strengthen the research sector. </p>
<p>In developing a process, let’s be inclusive, considerate of academics’ time, and also pay attention to the possible negative impacts of research. There’s clearly still lots to learn.</p>
<p><strong>Further reading:</strong></p>
<ul>
<li><a href="https://theconversation.com/era-results-medical-research-is-australias-best-11183">ERA results: medical research is Australia’s best</a></li>
</ul>
<p class="fine-print"><em><span>Tseen Khoo works for RMIT University. She has previously received funding from the ARC for a Discovery project.</span></em></p>
<p class="fine-print">Tseen Khoo, Senior Advisor, Research Development, RMIT University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>Thinking for money: moral questions for Australian research</h1>
<p class="fine-print">Published 2012-06-13</p>
<figure><img src="https://images.theconversation.com/files/11252/original/65z3f4mb-1338440972.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Allocating research grants based on past projects and potential profits is immoral – it skews research and damages the academic psyche.</span> <span class="attribution"><span class="source">URBAN ARTefakte</span></span></figcaption></figure><p><strong>WHAT IS AUSTRALIA FOR? Australia is no longer small, remote or isolated. It’s time to ask What Is Australia For?, and to acknowledge the wealth of resources we have beyond mining. Over the next two weeks The Conversation, in conjunction with <a href="http://griffithreview.com/provocations">Griffith REVIEW</a>, is publishing a series of provocations. Our authors are asking the big questions to encourage a robust national discussion about a new Australian identity that reflects our national, regional and global roles.</strong></p>
<p>The highest hopes for Australia are based on our intellectual capacity. We already have a substantial profile in education and research, underpinned by a vigorous culture of independent debate which promotes original scientific ideas, as well as theory and analytical narrative in the humanities. </p>
<p>So Australia is good for thinking. But I wonder: is it good for research? When it comes to how we do research – which perhaps represents the pinnacle of thinking – what moral, creative and cultural leadership does Australian research management offer?</p>
<p>The <a href="http://www.arc.gov.au/applicants/certification.htm">criteria</a> that the <a href="http://www.arc.gov.au/">Australian Research Council</a> uses for evaluating applications (themselves mirrored in numerous other research selection and evaluation processes) present a potential moral deficiency. A very large proportion of the ARC’s judgement rests on the applicant’s track record, prompting the question: is it fair?</p>
<p>Imagine an undergraduate marking rubric where 40% of the grade is attributed to the marks that you got in your previous essays. Throughout secondary and tertiary education, we scrupulously hold to the principle that the work of the student is judged without prejudice on the basis of the quality of the work. The idea that we might be influenced by the student’s grade point average is preposterous.</p>
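<p>The arithmetic of that rubric can be made concrete with a toy calculation (the numbers and the function below are invented for illustration; the ARC’s actual scoring process is more involved than a single weighted sum):</p>

```python
# Toy illustration: how a fixed track-record weighting caps a newcomer's score.
# The 0.40 weight follows the figure cited in the article; all else is invented.

TRACK_RECORD_WEIGHT = 0.40
PROPOSAL_WEIGHT = 1 - TRACK_RECORD_WEIGHT

def application_score(track_record: float, proposal: float) -> float:
    """Blend past record and current proposal, each scored on a 0-100 scale."""
    return TRACK_RECORD_WEIGHT * track_record + PROPOSAL_WEIGHT * proposal

# A newcomer with a perfect proposal but a thin record still loses to an
# established applicant with a mediocre proposal:
newcomer = application_score(track_record=20, proposal=100)   # 68.0
incumbent = application_score(track_record=95, proposal=55)   # 71.0
assert incumbent > newcomer
```

<p>On this simple model, no proposal, however brilliant, can lift a thin-record applicant above 68 out of 100 – which is the prejudice the marking-rubric analogy is pointing at.</p>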
<p>Research managers would argue that grant processes are not about assessing research but assessing a proposal for future research. Proposals are funded on the basis of past research – which is reckoned to be predictive – as well as ideas for new work; and this prospective element makes it more analogous to a scholarship, which is decided on the basis of past scores in undergraduate performance. </p>
<p>But the problem with this logic is that each of those past scores from school to honours is established on the basis of fully independent evaluations (where at no stage is past performance counted), whereas many of the metrics used in research have a dirty component of past evaluations contaminating fresh judgements. </p>
<p>Another angle might be to compare research grants with employment. As with a selection process for appointing applicants to an academic post, we are happy to aggregate the judgement of others in previous evaluations; we assiduously examine the CV and we assume that previous judgements were independent in the first place. </p>
<p>But a good selection panel will take the track record with due scepticism; after all, dull and uncreative souls could walk through the door with a great track record. If the selection panel is earnest about employing the best applicant, its members will read the papers or books or musical scores or whatever the applicant claims to have done, irrespective of where they are published, on the principle that you cannot judge a book by its cover. </p>
<p>The only reason that research panels attribute 40% weighting to track record is so as not to have to make a fully independent evaluation and take responsibility for it. But if, as an art critic, I relied on track record for even 10% of my judgement, I would be considered incompetent and ineligible for the job. It would be professionally derelict to stand in front of an artwork and allow my perception to be swayed by the artist’s CV. My judgement must absolutely not defer to anyone else’s, even to a small percentage. </p>
<p>My concern is not with the ARC, which is no worse than other funding bodies. My concern is with research management as an arbitrary code across Australian institutions, which is less than creative and open to moral questions. The fortune of institutions is understandably tied to their research. But how do we know what research is encouraged or discouraged?</p>
<p>How do we count research – which has been the subject of <a href="http://www.arc.gov.au/era/">Excellence in Research for Australia</a> (ERA) – when the measure is likely to dictate research production and promote research in its image? Sadly, while the ERA had the potential to realise an unprejudiced and independent evaluation exercise, it adopted the prior evaluation dependency which characterises most processes in research management. In 2010 and 2012, the <a href="http://www.arc.gov.au/era/era_2010/outcomes_2010.htm">ERA evaluations</a> were informed, among other things, by “Indicators of research quality” and “Indicators of research volume and activity”. Amazingly, research income featured in both of these measures. Even volume and activity are measured by income.</p>
<p>Research income is the major driver for institutional funding and is a key indicator in various league tables. Research income is also used to determine all kinds of benefits, such as <a href="http://www.innovation.gov.au/RESEARCH/RESEARCHBLOCKGRANTS/Pages/ResearchTrainingScheme.aspx">Research Training Scheme</a> places and scholarships for research graduates. So here is the same problem again. We judge merit by a deferred evaluation, in this case according to the grants that the research has been able to attract. It entrenches past judgement on criteria which may be fair or relatively arbitrary.</p>
<p>The grant metric is applied in various contexts with little inflection beyond benchmarking according to disciplines. In any given field, academics are routinely berated for not attracting research funding, even when they do not need it. They are reproached for not pursuing aggressively whatever funds might be available in the discipline and which their competitors have secured instead. As a result, their research, however prolific or original in its output, is deemed to be less competitive than the work of scholars who have gained grants. So their chances at promotion (or even, sometimes, job retention) are slimmer. Such scholars live, effectively, in a long research shadow, punished for their failure to get funding, even when the intellectual incentives to do so are absent.</p>
<p>Directing a scholar’s research by these measures might be suspected of being not only somewhat illogical but immoral. On average, the institution already directs more than a third of the salary of a teaching-and-research appointment toward research. That percentage should be enough to write learned articles and books, if that is the kind of research that a scholar does. </p>
<p>In certain fields, the only reason one might want a grant would be to avoid teaching or administration. But most good researchers enjoy teaching and think of it as immensely rewarding, a nexus which, in any other circumstance, we should be trying to cultivate.</p>
<p>To get out of administrative duties may be more admirable; however, even a $30,000 grant entails considerable administration, and with larger grants there is more employment, and thus more administrative work. You end up with more paperwork, not less, if you win a grant. The incentives to gain a grant are much less conspicuous than the agonies of preparing the applications, which tie the researcher into a manipulative game with little intrinsic reward and a great likelihood of failure and even humiliation by cantankerous competitive peers.</p>
<p>Because the natural incentives are absent, the unwilling academic has to be compelled by targets put into some managerial performance development instrument, where the need for achieving a grant is officially established and the scholar’s progress toward gaining it is monitored.</p>
<p>As a means of wasting time, this process has few equivalents; but if it were only wasteful, we could dismiss it as merely a clumsy bureaucratic encumbrance that arises in any institution that has policies. But after a long period of witnessing the consequences (formerly as one of those academic managers) I suspect this wasteful system may also be morally dubious, because its inefficiencies are so institutionalised as to disadvantage researchers who are honourably efficient.</p>
<p>As a measure of the prowess of research, research income has a corrosive effect on the confidence of whole areas and academics who, for one reason or another, are unlikely to score grants. Research income is a fetishised figure – it is a number without a denominator. If I want to judge a heater, I do not just measure the energy that it consumes but the output that it generates as well; because these two figures stand in a telling relationship to one another: the one figure can become the denominator of the other to yield a further figure representing its efficiency.</p>
<p>To pursue this analogy, research management examines the heater by adding (or possibly multiplying) the input and the output. In search of a denominator, it then asks how many people own the heater and bask in its warmth. Similarly, we find out how many people generated the aggregated income and output. Sure enough, we attribute the research to people. But the figure is structurally proportional to income and therefore does not measure efficiency.</p>
<p>I question the moral basis of this wilful disregard for efficiency. Research management does not want to reward research efficiency and refuses to recognise this concept throughout the system. For instance, the scholar who produces a learned book or several articles every two years using nothing but salary is more efficient than another scholar who produces similar output with the aid of a grant. </p>
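<p>The “missing denominator” argument can be sketched numerically (a hypothetical illustration with invented figures, not any real metric): aggregate income and output rank the grant-funded scholar higher, while a ratio of output to total research input can invert that ranking.</p>

```python
# Hypothetical sketch of the article's efficiency argument.
# "outputs" = publications over the period; inputs are in dollars (invented).

def efficiency(outputs: int, salary_research_share: float, grant: float) -> float:
    """Outputs produced per dollar of research input (salary share + grant)."""
    return outputs / (salary_research_share + grant)

# Two invented scholars over the same period:
scholar_a = efficiency(outputs=3, salary_research_share=40_000, grant=0)        # salary only
scholar_b = efficiency(outputs=4, salary_research_share=40_000, grant=200_000)  # large grant

# B has more raw output and far more income, yet A is several times more
# "efficient" once the grant is counted as an input rather than an achievement.
assert scholar_a > scholar_b
```

<p>The point of the sketch is only that the ranking flips depending on whether grant income sits in the numerator (as a prize) or the denominator (as a cost) – the choice research management currently refuses to confront.</p>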
<p>If, suddenly, research efficiency became a factor in the formula – do not hold your breath – institutions would instantly scramble to revise all their performance management instruments. Not because it is right but because there seems to be no moral dimension to research management, only a reflex response to any arbitrary metric set by a capricious king. Individual cells of research management will do not what is right for research and knowledge and the betterment of the human or planetary condition but whatever achieves a higher ranking for their host institutions. </p>
<p>It is commonly believed that research income as an indicator of quality is at least an economical metric, if not always fair. We tend to view such matters in a pragmatic spirit because we cannot see them in an ethical spirit. On the quality of funded research, I am personally agnostic because, when all is said and done, there is no basis for faith. There may be a strong link between research income and research quality, or there may be a weak or even inverse link, depending on the discipline and, above all, how we judge it. </p>
<p>Perhaps, being circumspect, one could say research management is not immoral, so much as amoral, in the sense that it may be free of ethical judgement. But any argument to unburden the field from moral judgement is not persuasive. Research management is never in a position where it can be amoral, because it concerns the distribution of assets that favour and yield advantages, and being outside the sphere of moral judgement is not an option.</p>
<p>It is good that we have research grants, because they allow research – especially expensive research – to prosper more than it otherwise would; but the terms of managing research, which rely so heavily on a chain of deferred judgements and which yield invidious and illogical rankings, involve processes of dubious moral assumptions. We can accept that research management is inexact and messy. None of that makes it ugly or immoral, just patchy and occasionally wrong. But the structural problems with research management go further; they skew research and damage the academic psyche.</p>
<p>Lecturers commence their academic career as researchers and, from early days, are researchers at heart. They love research: they become staff by virtue of doing a research degree and are cultivated thanks to their research potential and enthusiasm. Bit by bit, and with many ups and downs, they divide into winners and losers: a small proportion of researchers who achieve prestigious grants and a larger proportion who resolve to continue with their research plans on the basis of salary, perhaps with participation in other workers’ funded projects and perhaps with a feeling of inadequacy, in spite of their publications, sometimes promoted by pressure from their supervisor. </p>
<p>Within this stressful scenario, even the successful suffer anxiety; and for the demographic as a whole, the dead hand of research management makes them anxious about their performance. In relatively few years, academics become scared of research and see it as more threatening than joyful; they pursue it with an oppressive sense of their shortcomings, where their progress is measured by artificial criteria devised to make them unsettled and hungry.</p>
<p>Though we dress up this negotiation in the language of encouragement, it is structurally an abusive power relationship that demoralises too many good souls in too little time. It is not as if we do not know about this attrition of spirit, that many academics get exhausted and opt out of research with compound frustration for good reasons.</p>
<p>Research management, which governs the innovative thinking of science and the humanities, is neither scientific nor humane nor innovative; and my question, putting all of this together, is whether or not it can be considered moral or in any way progressive to match the hopes that we have in research itself. A system of grants, however arbitrary, is not immoral on its own, provided that it is not coupled to other conditions that affect a scholar’s career. </p>
<p>This process of uncoupling research evaluation from grant income on the one hand and future intellectual opportunities on the other seems necessary to its moral probity. Is it ethically proper to continue rating researchers by their grant income simply because it is convenient in yielding a metric for research evaluation? The crusade to evaluate research has been conducted on a peremptory basis, either heedless of its damaging consequences or smug in the bossy persuasion that greater hunger will make Australian research more internationally competitive.</p>
<p>Is such a system, so ingeniously contrived to spoil the spirits of so many researchers, likely to enhance Australia’s competitiveness? We were told at the beginning of the research evaluation exercise that the public has a right to know that the research it funds is excellent. But after so many formerly noble institutions have debased themselves by manipulating their data sets toward a flattering figure, we have no more assurance of quality than we did before evaluating it. </p>
<p>The conspicuous public attitude to research is respect and admiration, bordering on deference. So I wonder if there is any justifiable basis for research evaluation other than to provide the illusion of managerialism, or perhaps a misguided ideology that identifies hunger and anxiety as promoting productivity? I see massive disadvantages in our systems of evaluation but fail to see any advantages.</p>
<p>To maintain this disenfranchising system in the knowledge of its withering effect strikes me as morally unhappy and spiritually destructive. It would take a diabolical imagination to come up with a system better contrived to wreck the spirits of so many good researchers and dishearten them with their own achievement.</p>
<p>The system needs to be rebuilt from the ground up and on the principle that dignifies the generosity and efficiency of researchers. I look forward to a time when the faith that the public has in our research is matched by the faith that researchers themselves have in the structures that manage them.</p>
<p><em>Read more provocations at <a href="https://theconversation.com/topics/what-is-australia-for">The Conversation</a> and at <a href="http://griffithreview.com/provocations">Griffith REVIEW</a>.</em></p>
<p class="fine-print"><em><span>Robert Nelson has on occasion received funding from research bodies.</span></em></p>
<p class="fine-print">Robert Nelson, Associate Director Student Experience, Monash University. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>High impact: how the story of research can be told better</h1>
<p class="fine-print">Published 2012-06-05</p>
<figure><img src="https://images.theconversation.com/files/11407/original/zdmm8ymb-1338879870.jpg?ixlib=rb-1.1.0&rect=34%2C34%2C577%2C404&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Real impact is important when considering how to fund research.</span> <span class="attribution"><span class="source">Flickr/Mozzer502</span></span></figcaption></figure><p>When it comes to engaging with industry, government and the broader community, there is one secret weapon that is often overlooked in the university sector – the humble story. The art of storytelling is not one the sector is particularly proficient at, and nowhere is that more evident than in the world of university research.</p>
<p>Ask the average person on the street if they consider research funding a priority and you’d be hard pressed to find many (if any) who would say yes. Ask that same person again if they value research on brain, breast and prostate cancer, clean energy, information technology, and engineering and you will no doubt get a very different answer. </p>
<p>In 2010, according to the Australian Bureau of Statistics (ABS), universities spent $8.2 billion on research and development – with funding largely sourced from government and industry. But it seems the community in general does not appear to draw the connection between how and why research is funded and the kind of research undertaken in our universities. </p>
<p>Linking research with research outcomes is imperative for industry, governments and the community to understand and see value in university research.</p>
<h2>A new approach</h2>
<p>In an effort to address this issue – and in a national first – research undertaken in 30 per cent of Australia’s universities will be assessed for the impact it has.</p>
<p>Twelve Australian universities will participate in the three-month trial – known as the Excellence in Innovation for Australia (EIA) initiative – headed by the Australian Technology Network of Universities (ATN) and the Group of Eight (Go8).</p>
<p>A key focus of the trial will be the narrative of research. This may be dismissed by the academic community as irrelevant, but narrative is often critical to ensuring a clear understanding of the impact a piece of research may have.</p>
<p>Another key feature will be the use of industry stakeholders in the assessment process – not just academic experts. While academic experts will still play a key role, they will not form the majority membership of the panels established to undertake the assessment. The research will also be assessed against <a href="http://www.abs.gov.au/ausstats/abs@.nsf/0/CF7ADB06FA2DFD69CA2574180004CB82?opendocument">Socio-Economic Objectives</a> as outlined by the ABS, rather than the traditional <a href="http://www.abs.gov.au/ausstats/abs@.nsf/0/6BB427AB9696C225CA2574180004463E?opendocument">Fields of Research</a>.</p>
<h2>The story of research</h2>
<p>In this way, the trial is a radical departure from the traditional tools of research assessment currently used by government. Traditional peer review focuses on what one academic thinks of another’s research, and on indicators derived directly from the research itself.</p>
<p>While that has its place, in this exercise researchers are being asked to focus on a clearly identified impact or public good and then explain how their research contributed to this outcome – telling the story via a succinct case study. As far as possible, the case study should be free of jargon so that it can be understood by those outside the research area being evaluated.</p>
<p>Those of us who believe in the power of research know that there will be many stories where university knowledge has delivered benefits to the Australian community through new technologies, jobs, health outcomes, increased security for Australia or by contributing to socially cohesive communities.</p>
<h2>Real-world results</h2>
<p>In this way it is hoped the EIA will be the beginning of a process whereby universities achieve a greater understanding of the outcomes that government, business and the community value and how research can contribute. </p>
<p>This engagement is critical to Australia’s competitiveness in a global economy. Without the “innovation economy” that this engagement creates, Australia will struggle to sustain its standard of living and maintain its place in the world. </p>
<p>This emphasis on applied research should not be at the expense of the academic excellence in our Higher Education system but a complement to it. Innovative economies are backed by universities that register highly on both scales of academic excellence and industry engagement. This is where the Australian Higher Education sector needs to be.</p>
<p class="fine-print"><em><span>Vicki Thomson is Executive Director, Development & Communications at Australian Technology Network of Universities.</span></em></p>
<p class="fine-print">Vicki Thomson, Executive Director, Development & Communications, Australian Technology Network of Universities. Licensed as Creative Commons – attribution, no derivatives.</p>
<h1>The ‘impact’ of research carries weight (but ripples matter more)</h1>
<p class="fine-print">Published 2012-05-04</p>
<figure><img src="https://images.theconversation.com/files/10293/original/9zymrswc-1336008537.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">Do we need new vocabulary for measuring the “engagement”, “use”, “relevance” and “appropriateness” of research?</span> <span class="attribution"><span class="source">spettacolopuro</span></span></figcaption></figure><p>What has been the impact of the <a href="http://galileo.rice.edu/sci/instruments/telescope.html">invention of the telescope</a>? What has been the impact of <a href="https://theconversation.com/explainer-einsteins-theory-of-general-relativity-3481">Einstein’s Theory of General Relativity</a>, or the <a href="http://www.treehugger.com/corporate-responsibility/man-arrested-for-attempting-to-split-the-atom-in-his-kitchen.html">splitting of the atom</a>?</p>
<p>Yes, that’s right: the idea of measuring the “impact” of research is back in a big way. Within the research community and within government, plenty of people are <a href="https://theconversation.com/group-of-eight-view-of-measuring-the-impact-of-research-4818">thinking about this</a> in 2012. </p>
<p>As many have acknowledged, the Federal Government’s current <a href="http://www.arc.gov.au/era/">Excellence in Research for Australia</a> (ERA) initiative provides a <a href="http://www.atn.edu.au/atnconference/2011/atn-go8_symposium/report_of_2011_atn-go8_symposium.pdf">strong evaluation</a> of the quality of the research conducted in Australian universities, but doesn’t necessarily tell us much at all about the impacts of this research in the broader community. </p>
<p>The government’s 2011 review, Focusing Australia’s Publicly Funded Research, <a href="http://www.innovation.gov.au/Research/Pages/FocusingAustraliasPubliclyFundedResearch.aspx">recommended</a> a feasibility study be undertaken by the Department of Industry, Innovation, Science, Research and Tertiary Education on “possible approaches for developing a rigorous, transparent, system wide Australian research impact assessment mechanism”. </p>
<p>This will build upon work already underway across the university sector and in <a href="https://theconversation.com/institutions/csiro">CSIRO</a>. </p>
<p>Making more of an effort to understand how research interacts with the broader community is – to state our opinion up front – A Good Thing. It promotes thinking about the outside world – encouraging engagement beyond a particular academic discipline and awareness of the interests of the people actually funding our work, and the issues they might deem important. </p>
<p>It also focuses effort on clearly articulating the many ways in which our investments in research deliver benefits for society.</p>
<p>Yet perhaps in this nascent discussion about impact we have put the cart before the horse. Perhaps we have allowed the conversation to get away from us before we’ve had a chance to think through what it is we actually want to achieve in our governance of the Australian research system, and what we want to measure and reward. When it comes down to it, is “impact” even the right word? </p>
<p>“Impact” sounds like a concept from the world of physics – a scientisation of the very language we might use to talk about research and its place in society. “Impact” seems to denote a process that can be rational, can be measured – where bigger would equal better. </p>
<p>It also seems to describe a singular effect from research activity – someone does lots of work, and then there is an impact. Bang. Done. </p>
<p>But isn’t the age of linear cause and effect supposed to be over? Aren’t we supposed to be living in a more complicated, more contingent age of overlapping fields, where innovation happens at the boundaries? </p>
<p>To talk of “impact” in a singular, physical way is to slip back to a simple linear model of research and innovation. The dominant measures of the “impact” of research and innovation – dollars, people, publications and patents – still reinforce this model. </p>
<p>The problem is, <a href="http://sciencepolicy.colorado.edu/publications/special/honest_broker/index.html">decades of research on research and innovation</a> have shown that the process is neither this simple nor this linear.</p>
<p>And, of course, impact isn’t either. Research is part of, and contributes to, the complicated and overlapping worlds of human affairs. It shapes, and is shaped by, broader society. The tentacles of impact stretch into the past and far off into the future. </p>
<p>Which is not to say impact cannot be measured at all. We believe there are many opportunities to enhance the metrics of research and innovation, and that this is important work – it is crucial that individual researchers, research organisations and governments are engaged in the discussion. </p>
<p>But there are two key points – often overlooked – that must frame how this work progresses. </p>
<p>1) The new knowledge and new tools that stem from research do not create singular, one-off “impact”. Research activity leads to multiple impacts in different locations and different times. </p>
<p>2) Some of these impacts will be seen as positive by certain people in certain places and times, while others will be seen as neutral or even negative. </p>
<p>The word “impact” itself contains no normative assessment, yet many seem to be using it as a synonym for benefit. If we are going to assess research impact systematically, we will need to start to account for multiple impacts. </p>
<p>Consider the impact of the development of the <a href="http://en.wikipedia.org/wiki/Cochlear_implant">cochlear implant</a>. Hundreds of thousands of people have become able to hear, living lives that are (probably) easier and (possibly) richer. How would we measure this? </p>
<p>Much money has been made, and many jobs created. Simultaneously, many in the deaf community have come to see the technology as a form of “<a href="http://archie.kumc.edu/bitstream/handle/2271/848/STT-JUNW_2010_Heffley_Pediatric-Cochlear-Implants.pdf?sequence=1">cultural genocide</a>”. Should this be taken into account when assessing impact? </p>
<p>Researchers have also studied the introduction of new agricultural technologies, such as the <a href="http://news.ucdavis.edu/search/news_detail.lasso?id=7521">tomato harvester</a>, and their social, economic and environmental impacts. </p>
<p>While productivity and profitability rose with the introduction of certain technologies, this was also accompanied by job losses among certain classes of workers and the restructuring of farm holdings, gender roles in the workforce, and regional communities. </p>
<p>All of this raises important questions of accountability. Individual researchers would rightly be nervous about being measured and rewarded against such broad, long-term impacts, over which they have little or no control. So who should be held accountable for what? </p>
<p>If we are seeking to improve our assessment of the impacts of research in the wider community, what is the role for researchers and research organisations, and what is the role for government and the public?</p>
<p>Perhaps we should start by not jumping straight to “impact”. It’s not a simple linear process, but there are some things that happen between research and societal impacts, and perhaps these are things we should start to talk about and measure more. </p>
<p>Things such as “engagement” and “use”, and “relevance” and “appropriateness”. We need to pair the quantitative with the qualitative as we seek to better understand impacts, and develop new measures of engagement and use that go beyond our current – largely scientific and economic – metrics. </p>
<p>It might prove difficult, or even impossible, to answer the question about the full, long-term impacts of a particular piece of research, but it’s important that questions are being asked. </p>
<p>If we stop looking for one single big answer and focus instead on smaller steps along the way, there is a lot that can be done.</p>
<p class="fine-print"><em><span>Will J Grant has received funding from the Department of Industry, Innovation, Science, Research and Tertiary Education.</span></em></p><p class="fine-print"><em><span>The HC Coombs Policy Forum at ANU receives Australian Government funding through the "Enhancing Public Policy" initiative.</span></em></p>
<p class="fine-print"><em><span>Will J Grant, Researcher / Lecturer, Australian National Centre for the Public Awareness of Science, Australian National University; Paul Harris, Deputy Director, HC Coombs Policy Forum, Australian National University.</span></em></p>
<p class="fine-print"><em>Licensed as Creative Commons – attribution, no derivatives.</em></p>
<h2>In universities obsessed with research, here’s what falls between the cracks</h2>
<p class="fine-print"><em>23 May 2011</em></p>
<figure><img src="https://images.theconversation.com/files/695/original/Volunteer_fireys_Rob_Down_Under_Flickr.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=496&fit=clip" /><figcaption><span class="caption">The university funding system discourages research on volunteers like these men who are risking their lives to help their community.</span> <span class="attribution"><span class="source">Flickr/Rob Down Under</span></span></figcaption></figure><p>In Australian universities at the moment, research is everything. Universities obsess over their rankings in the new ERA system, which measures research performance. For academics, publishing in the top journals isn’t just part of playing the game; it’s the whole game. </p>
<p>But there are things that the rankings don’t tell you, and valuable work that research-obsessed university administrators currently don’t recognise. </p>
<p>It’s another example of the measurement system changing the ways people behave, for the worse. And the unintended consequences of this unhealthy research obsession are holding us back.</p>
<h2>It’s the research, stupid</h2>
<p>The accepted way to measure academic performance has become research output. Excellence in Research for Australia (ERA), the national research assessment exercise run by the Australian Research Council (ARC), shapes the distribution of the resources that determine how Australia presents its knowledge to the world. </p>
<p>Getting a slice of the $510 million Sustainable Research Excellence program has become the holy grail for many university administrators. But it ignores the hard work being done teaching the next generation.</p>
<p>There is already evidence that research assessment exercises overseas have amplified the swing towards research in promotion policies. Take this report from Alan Jenkins and Graham Gibbs in the Guardian on 15 August 1995:</p>
<p>“A survey conducted by the Oxford Centre for Staff Development in the UK showed that while 96% of institutions included excellence in teaching amongst promotion criteria, only 11% of promotion decisions were made on teaching grounds. Another 38% of institutions reported never having promoted someone primarily on the grounds of teaching excellence. Written responses included ‘Not in living memory’, and the wonderfully disdainful ‘Not at this institution’. One respondent reported: ‘There are three criteria for appointment here, research, research and research’.”</p>
<p>In practice this may mean further concentrating research in a small number of institutions and perhaps the emergence of “teaching-only” departments or even universities. </p>
<p>Given the federal government’s commitment to significantly increase the participation rate of school leavers in higher education, many of these students may end up in research-poor environments. </p>
<p>We risk creating a strange mixture of elitism and egalitarianism. </p>
<p>Universities will be able to use ERA performance to guide the allocation of resources as well as invest in future skills. </p>
<h2>How important is your journal? </h2>
<p>Many people spend their lives getting <em>that</em> paper into <em>that</em> journal. Name dropping matters. To be taken seriously, and enjoy the funding benefits, you have to be published in the key journals in your field. Risk taking is avoided. To get published you have to cite those who have trodden that path before.</p>
<p>But the <a href="http://www.arc.gov.au/era/journal_list_dev.htm">2010 journal rankings</a> seriously devalue various interdisciplinary research fields and could damage Australia’s strong international reputation in these fields.</p>
<p>Interdisciplinary research is often where the breakthroughs come. </p>
<p>So where does that leave those whose work is innovative? Multidisciplinary researchers are thinking outside their academic box, and they’re being penalised for it. </p>
<p>In areas like the humanities, arts and social sciences, it’s tricky to assess the quality of research in the way that ERA requires. Its way of examining citation lists is not well attuned to measuring interdisciplinary research and cross-sector collaboration.</p>
<p>And that’s not all. International journals are favoured over local ones but they are not necessarily interested in publishing Australia-specific research. Thus, under the ERA, we applaud someone for publishing in an international journal, rather than recognise efforts to contribute knowledge and participate in debates in Australia. </p>
<p>A case in point is the potential effect on not-for-profit sector research, where journals with lower impact measures, smaller circulation and much higher acceptance rates are outranking the most longstanding and prestigious international journals in the field.</p>
<h2>Writing off the not-for-profit sector</h2>
<figure class="align-left ">
<img alt="" src="https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=237&fit=clip" srcset="https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=600&h=424&fit=crop&dpr=1 600w, https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=600&h=424&fit=crop&dpr=2 1200w, https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=600&h=424&fit=crop&dpr=3 1800w, https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=754&h=533&fit=crop&dpr=1 754w, https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=30&auto=format&w=754&h=533&fit=crop&dpr=2 1508w, https://images.theconversation.com/files/726/original/School_sport.jpg?ixlib=rb-1.1.0&q=15&auto=format&w=754&h=533&fit=crop&dpr=3 2262w" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px">
<figcaption>
<span class="caption">86% of Australian adults work for, volunteer with or are a member of a not-for-profit organisation. Yet the role of these organisations is in danger of being ignored by the academic community.</span>
<span class="attribution"><span class="source">AAP/Julian Smith</span></span>
</figcaption>
</figure>
<p>The limitations of the ERA in measuring the quality of multidisciplinary research are glaringly obvious in not-for-profit studies. </p>
<p>It’s a rapidly growing area of research and is at the centre of many contemporary public policy debates.</p>
<p>According to the <a href="http://www.abs.gov.au/Ausstats/abs@.nsf/0/C068946BDCA09FAFCA25749B0017A3D4?OpenDocument">2006/07 ABS data</a>, the Australian not-for-profit sector: </p>
<p>• Comprised 41,000 economically significant organisations.</p>
<p>• Employed 890,000 people – 8.6% of Australians in employment.</p>
<p>• Earned an income of $76 billion.</p>
<p>• Contributed $34 billion, or 3.4%, to GDP. </p>
<p>• Made an economic contribution equivalent to that of the government administration and defence industry, and one and a half times that of the agriculture industry. </p>
<p>• Counted over 13 million Australians (86% of adults) as members of at least one not-for-profit association, with 48% belonging to at least three.</p>
<p>• Included just under 1 million Australians holding office in a not-for-profit organisation.</p>
<p>But if no one in Australia is researching the not-for-profit sector or volunteering, it is highly likely that there will be no undergraduate, or even postgraduate, courses on these subjects. This potentially underserves 800,000 paid employees and over 6 million volunteers – Australia’s largest workforce!</p>
<h2>The peer review process - an imperfect science</h2>
<p>The panels reviewing journals could be more transparent about how they actually operated and what was done with their recommendations.</p>
<p>In the case of not-for-profit research, all submissions were made through the Australian Business Deans Council (ABDC) joint submission. The ABDC ranked the not-for-profit journals appropriately on its own list, but they were all then downgraded in the final ERA list. </p>
<p>It is difficult to know what happened. Maybe the ABDC’s advice was rejected, or maybe it didn’t defend not-for-profit research strongly enough. I hope the current round of consultation will provide greater access to those groups that might better represent researchers engaged in multidisciplinary work.</p>
<h2>Why fields of research codes don’t tell the whole story</h2>
<p>If your work falls neatly into a Field of Research (FoR) code, you’re in luck: ERA can easily identify what you’re doing. But if you collaborate outside the strict parameters of a single FoR and your output is spread across several four-digit research fields, a strong ERA score is much harder to achieve. This strengthens disciplinary “silos” while multidisciplinary approaches become invisible.</p>
<p>This was the case for non-profit studies. It has no distinct FoR and internationally esteemed multidisciplinary not-for-profit journals were outranked by less prestigious, subject specific and far less cited marketing, management and accounting journals.</p>
<p>A way forward would be to allocate not-for-profit research its own FoR code, in much the same way as the ARC has given codes to other multidisciplinary areas such as tourism studies (which has its own category despite employing only half as many workers as the not-for-profit sector). </p>
<p>Other countries have Fields of Research or equivalent that separately identify not-for-profit and voluntary sector studies, so there is a strong case for international compatibility.</p>
<h2>What will happen if we don’t take action? </h2>
<p>Teaching and research are interdependent. Research productivity significantly adds to both the quality and substance of classroom teaching and teaching adds to the quality of research, not least because it allows for the (often valuable) input of students. </p>
<p>The ERA ratings say nothing about teaching excellence. For this reason it is likely that the ERA will further intensify the research culture in many university departments, probably at the expense of teaching. </p>
<p>The division created within departments between researchers and teachers can leave them unable to function as communities of scholars. Instead, departments become a setting for game playing for some and a home of resentment and bitterness for others.</p>
<p>By devaluing the top journals in multidisciplinary fields of research, we are on a path that leads us away from accepted international best practice – just when we need more than ever to ensure that our researchers have international standing. </p>
<p>It is time for not-for-profit sector researchers to call on the ARC to revise the 2010 Journal Ranking list to recognise this important field of research.</p>
<p class="fine-print"><em><span>Bronwen Dalton has received funding from the ARC.</span></em></p>
<p class="fine-print"><em><span>Bronwen Dalton, Associate Professor, Management Discipline Group, UTS Business School, University of Technology Sydney.</span></em></p>
<p class="fine-print"><em>Licensed as Creative Commons – attribution, no derivatives.</em></p>