
What REF case studies reveal about measuring research impact

Proof that research leaves a mark. Phatic-Photography via www.shutterstock.com

Understanding and rewarding research impact – where research has benefits or makes broader contributions to society outside higher education – has never been as topical or important as it is today. Now a new study into the way universities achieve research impact has shown there is no simple way of measuring the diversity of that impact or calculating its value.

The UK undertakes a systematic evaluation of university research every five years – the largest research evaluation exercise of its kind anywhere in the world. In 2014, the Research Excellence Framework (REF) assessed universities on the basis of the quality of research outputs, the vitality of the research environment and, for the first time, the wider impact of research.

Impact was weighted at 20% of the total assessment – influencing the allocation of around £1.6 billion of public funding over the next five years.

Mining the REF case studies

Impact in the REF was assessed through the submission of case studies using two criteria: reach – the spread or breadth of influence or effect on the relevant constituencies, and significance – the intensity of the influence or effect. The case studies, now available via an online database, provide an extraordinary resource for those interested in analysing knowledge translation and research impact.

Our analysis of the 6,679 non-redacted case studies yielded a number of observations. The impact of UK research is truly global, with every country in the world mentioned. Different types of universities specialise in different types of impact. For example, the larger universities made a disproportionate contribution to impact topics such as clinical guidance and dentistry, while smaller institutions specialised in areas such as sport, regional innovation and enterprise, and arts and culture. The research underpinning impact is multidisciplinary, and a single piece of research often generates several different kinds of impact at once.

The bulk of the analysis of the case studies was undertaken using text-mining techniques – the analysis of data contained in natural-language text. There were some 6.2m words in the case studies submitted to the REF, and by removing common “stop words” such as “and”, “but” and “or” we were left with 3.7m words.
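We haven’t reproduced our full pipeline here, but the short Python sketch below illustrates that filtering step. The stop-word list and tokeniser are illustrative stand-ins, not the ones used in the analysis.

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; a real analysis would use a much
# longer one (words like "and", "but", "or" carry no topical meaning).
STOP_WORDS = {"and", "but", "or", "the", "a", "an", "of", "to", "in", "is", "for"}

def tokenise(text):
    """Lower-case the text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def filter_stop_words(tokens):
    """Drop stop words, keeping only the content-bearing tokens."""
    return [t for t in tokens if t not in STOP_WORDS]

# Example with a sentence in the style of a REF case study.
case_study = "The research led to wide-scale changes in clinical practice and policy."
tokens = tokenise(case_study)
content = filter_stop_words(tokens)

print(len(tokens), len(content))        # total words vs. content-bearing words
print(Counter(content).most_common(5))  # the most frequent content words
```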

This text-mining exercise led to the identification of 60 impact topics or areas where research influences society, such as medical ethics, climate change, clinical guidance and women, gender and minorities. The text mining was supplemented by “deep mines” where more than 1,000 case studies were read to provide a deeper picture of the data.
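For readers curious how topics can be discovered from text at all, the sketch below shows one common approach – latent Dirichlet allocation via scikit-learn – purely as an illustration. The documents, the number of topics and the model settings are made up, and this is not necessarily the method used in our analysis.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny made-up corpus standing in for thousands of case-study texts.
documents = [
    "clinical trial improved cancer treatment and patient outcomes",
    "climate model informed government policy on emissions",
    "exhibition brought archaeology research to museum visitors",
    "clinical guidance changed prescribing practice in hospitals",
    "flood risk research shaped environmental planning policy",
]

# Convert the documents into a bag-of-words matrix, dropping English stop words.
vectoriser = CountVectorizer(stop_words="english")
counts = vectoriser.fit_transform(documents)

# Fit a small LDA model; the real exercise reported 60 topics across 6,679 studies.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(counts)

# Print the top words for each discovered topic.
words = vectoriser.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```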

The case studies provide a rich resource, demonstrating the breadth and depth of research impact in a way that has not been revealed before. Universities claim changes and benefits to the economy, society, culture, public policy and services, health, the environment and quality of life.

To pick just two from a multitude of examples, one case study from UCL highlighted how a series of platinum-based lung cancer trials led to “wide-scale changes in clinical practice”, while another, from Lancaster, focusing on the role of fathers in the family unit, led to a “tenfold increase in funding for work with fathers in Children’s Centres”.

Metrics for impact remain a challenge

Although the case studies are an incredibly rich resource, there are challenges in using them to measure research impact, and improvements that could be made for the next REF in 2020. We discovered the case studies contained more than 3,700 individual pathways to impact – a real challenge to anyone interested in producing metrics on impact. Qualitative case studies capture the diverse connections between research and society, and it is difficult to reduce this diversity to numbers.

The numerical evidence supporting claims for impact was also diverse and inconsistent, suggesting that the development of robust impact metrics is unlikely. While the impact case studies provide a rich resource for analysis, the information is collected for assessment purposes and may need to be standardised before further analysis can be fruitful.

For example, different universities measured the same kind of impact in different ways. Financial information was expressed in various currencies, and researchers used different metrics for online statistics such as downloads and page views.
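As an illustration of the kind of normalisation a metrics exercise would need before figures could be compared, the snippet below converts hypothetical financial claims into a common currency. The figures, field names and exchange rates are invented for the example and are not drawn from the REF data.

```python
# Illustrative exchange rates into GBP; a real analysis would need dated, sourced rates.
TO_GBP = {"GBP": 1.0, "USD": 0.65, "EUR": 0.72}

# Hypothetical impact claims, each reported in a different currency.
claims = [
    {"case_study": "A", "amount": 2_000_000, "currency": "USD"},
    {"case_study": "B", "amount": 1_500_000, "currency": "GBP"},
    {"case_study": "C", "amount": 900_000, "currency": "EUR"},
]

def to_gbp(claim):
    """Convert a reported figure into pounds so claims can be compared."""
    return claim["amount"] * TO_GBP[claim["currency"]]

for claim in claims:
    print(claim["case_study"], round(to_gbp(claim)))
```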

We initially thought it would be possible to use case studies to estimate the returns on investment from research – for example, by calculating wealth or jobs created, health benefits delivered or lives saved. But early in our analysis it became apparent that such an approach was not feasible, because the large volume of numerical data was reported too inconsistently to be aggregated.

No impact case study looks the same. Vorobyeva via www.shutterstock.com

One area of future research could be to look at the language and sentiments used in different case studies and by different disciplines. Reading large numbers of case studies made clear the contrast between the empirical language of scientists and the more descriptive tone of some humanities scholars, and we need to understand further how this relates to the types of impact reported.

There are a few “tweaks” that can be made to the process to help address these challenges and, while metrics are important, we shouldn’t lose sight of the power of the case study. It is also important that we don’t overly restrict researchers in how they can demonstrate their impact – especially when such a broad range of disciplines is included.
