
Watt report suggests financial incentives for measuring research impact

Research must have an impact – but what’s the best way to measure it? www.shutterstock.com

A government report on research funding and policy recommends introducing a financial incentive to ensure university research benefits society and business.

The report, by Ian Watt, a former secretary of the Department of the Prime Minister and Cabinet, says that producing high-quality research alone is not enough given the investment made in Australia’s universities – research must also have an impact.

It suggests designing a new “impact and engagement framework” as a way to measure research impact. This would sit alongside the Excellence in Research for Australia (ERA) system, which measures research quality.

It’s not clear exactly how impact would be measured, but the report suggests combining a set of engagement metrics – such as research income – with case studies and expert review. The report advises assessing research every three years, starting with a pilot study in 2017.

Les Field, secretary for science policy at the Australian Academy of Science, welcomes the idea of additional targeted funding for research connections, along with opportunities for research students to gain industry experience during their studies. But he says it’s “disappointing that the review did not address the urgent need to fund the full cost of research”.

John Fitzgerald, president of the Australian Academy of the Humanities, says: “Effective impact measures would need to include indicators of social innovation, community engagement and cultural enrichment in addition to research commercialisation.”

He urges that any new framework ought “not to add greatly to the burden of reporting compliance for the university sector”.

The report draws cautiously on the UK’s Research Excellence Framework (REF) – introduced in 2014 to assess research quality and impact – which uses a mixture of case studies and expert panels to measure research impact.

This model has been widely criticised as costly – it is estimated to have cost the UK around A$420 million – as well as highly time-consuming and bureaucratic.

UK universities submitted more than 6,500 case studies. Each took around 30 days on average to prepare – including training time – with some redrafted more than 10 times and, in one extreme example, 30 times.

Tim Cahill, director of Research Strategies Australia, a consultancy specialising in higher education research policy, advises against introducing case studies to measure impact in Australia.

He says: “The value of case studies is what we can learn from them. The UK has already produced thousands of case studies that we can use – are we going to learn anything new by producing hundreds or even thousands of our own?”

The report also recommends consolidating research block grants from six schemes to two. It further recommends changing the Australian Research Council (ARC) Linkage Projects scheme – which funds organisations supporting research that involves risk or innovation – from one funding round per year to a continuous application and peer-assessment process from 1 July 2016.

Field says: “Making ARC linkage grant processes easier and more attractive will give a shot in the arm to the relationship between industry and research organisations, helping them to work together to solve Australia’s problems.”

Some of the ideas in the report are expected to feature in the innovation statement due out on Monday, which is set to announce major changes to the way academic research is funded.

Declaration: Tim Cahill is currently consulting to The Conversation on research engagement metrics.