
Research meets the real world – now comes the hard part

Measuring the impact of research in the real world sounds a bit impossible, but Australia is going to do it anyway.

So you’re just recovering from the last ERA (Excellence in Research for Australia) assessment? Dust yourself off, Excellence in Innovation for Australia (EIA) is heading our way.

This is the new paradigm for measuring university research. It looks not only at traditional indicators such as journal citation rates, but also directly at how well society understands and applies the research.

There’s often a clear path from medical and engineering research to “real world” outcomes. But is it feasible for the humanities and social sciences to do the same?

If you think this is a lofty question, think again. The UK is now implementing ways to assess the impact of research from all disciplines and tying it to funding as soon as next year.

Australia, it seems, looks likely to follow.

The story so far…

Australia’s Research Quality Framework, trialled under the Howard government but superseded by ERA under the Rudd government, is said to have directly influenced the design of the research impact assessment framework in the UK.

Last year, the federal government used a similar approach when it ran a national trial on research impact with twelve universities. Seventy-five panellists, mostly from industry and community organisations, volunteered to carry out this exercise.

Using a five-point scale of A, B, C, D and E (sound familiar?), they assessed the impact of research as demonstrated by case studies prepared by the participating universities. “A” corresponded to outstanding impact, all the way down to “E” – limited impact. Happily, 87% of case studies were rated B or C – more than considerable or considerable impact.

But here’s the catch. In the current budget environment, there is little likelihood that universities will get any extra funding for demonstrating research impact or innovation. Rather, we are likely to see an even greater share of existing funding (perhaps 20%) made contingent upon achieving high ratings on the assessment scale.

This is likely to be overlaid on top of the extra costs involved in implementing the assessments, which will be conducted independently of ERA.

Who doesn’t like a challenge?

Having said all this, it’s a hard reform to resist, isn’t it? As an academic at an innovative university, I am personally pleased with this emphasis on applied knowledge. However, there are at least three problems with the development of yet another national research assessment exercise.

First, the funding pie is shrinking dramatically. The Gillard government has just announced A$2.3 billion of federal government cuts to higher education. This brings total federal cuts to higher education budgets to A$4 billion since 2011, making an exercise of this scale harder to afford.

Second, some knowledge is too theoretical to be applied directly or immediately to the real world (yet is essential to the development of our understanding of the world or indeed the universe).

Third, there are very different demands involved in striving for research impact and research excellence, as defined by ERA and EIA. Since end users don’t generally read academic journals, a layer of extra knowledge dissemination and dialogue with the community is required for the EIA.

Budgets and the workforce

Getting down to tin tacks, the reality is that university budgets are now so squeezed that finding time and resources for yet another major compliance exercise, and the behaviour changes that go with it, will be an immense challenge.

Within this is the challenge of how to manage an ageing academic workforce that is still confused over how to spend its precious research time. In the current context, we are talking about time that is being severely eroded by administration, compliance regimes, expanding class sizes, rising publication rejection rates, ever fiercer competition for grants and the growing demands for industry and community engagement.

There is also the question of whether deans will continue to insist that staff spend their time only pursuing A* journal publications and ARC grants (as per the ERA requirements). Or will there be acceptance that research impact requires a range of industry and community networks for dissemination of our research into the real world?

Jack of all trades

The harsh reality is that some scholars are by temperament best suited to the lab or the lecture theatre and should never be let near an industry partner. Others will be great advocates or poster boys/girls for university research, but may not stack up in the academic quality game.

Some will prefer to stick with their teaching, and a few highly talented souls will somehow embody all the necessary attributes to meet both the research quality and research impact demands.

Increasingly, university administrators will need to recognise the strengths of particular academics and play to those strengths, rather than expecting all academics to play all the games successfully at one time.

Make no mistake, this will mean having to count and possibly even reward non-listed, unranked or low-impact publications, providing time and money to attend “external” forums, and even promoting academics on the evidence of their knowledge dissemination and research impact.

It will also require some sophisticated explanation of the difference between potentially achieving a 3 (world standard) or higher ranking on the ERA research quality scale, and maybe a D or E (modest or negligible impact) on the EIA scale.

Finally, if we are to do all this, we must insist that the government provides the resources needed.
