
What metrics don’t tell us about the way students learn

A big push is under way in higher education to measure how students are learning and how good lecturers are at teaching them. Universities can track how much time a student spent on a learning module or how often they accessed a journal article or online book. Some universities are starting to use these “learning analytics” to study how students access course material. But that is currently all they can do, because of the limits of using this kind of “big data” to measure the effectiveness of teaching and learning.

In the UK, the government has confirmed plans to measure teaching excellence at universities in England via a new Teaching Excellence Framework (TEF). The Queen’s Speech revealed that a new Higher Education and Research Bill will be introduced to take forward regulation around the ideas set out in the higher education white paper.

Currently, the TEF plans to align teaching excellence with universities’ scores on the National Student Survey, data from the Higher Education Statistics Agency on how many students complete their courses, and the proportion of graduates in employment, based on a survey conducted six months after they leave university.

Universities will also be able to submit qualitative and quantitative evidence of up to 15 pages to explain and contextualise their metrics. This is where it gets sticky: will the people with the highest-quality teaching and learning shine through, or will the people with the best stories and prettiest data win in the end?

The fluidity of metrics leaves more wiggle room than the government thinks, and that wiggle room will make it possible to game the new system, no matter what the white paper claims. For example, there is the possibility of linking data that measures what has happened with events that may or may not be related, such as tracking a student’s participation in online discussions against their ratings of the way their lecturers use technology.
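To see why such links can mislead, here is a minimal sketch in Python, using invented numbers, of two engagement metrics that correlate strongly only because both are driven by an unobserved third factor. The variable names and figures are assumptions for illustration, not real institutional data.

    # Hypothetical illustration: a strong correlation produced entirely by a
    # shared third factor (prior motivation), not by one metric causing the other.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    motivation = rng.normal(0, 1, size=n)                # unobserved confounder
    forum_posts = 5 + 3 * motivation + rng.normal(0, 1, size=n)
    lecturer_rating = 3.5 + 0.8 * motivation + rng.normal(0, 0.3, size=n)

    r = np.corrcoef(forum_posts, lecturer_rating)[0, 1]
    print(f"correlation: {r:.2f}")   # typically around 0.9, yet neither metric causes the other

A dashboard reporting only the headline number would make these two metrics look causally linked when they are not.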

What metrics miss out

Yet teaching and learning are more than just analytics. It is not possible to measure good teaching by simply looking at lecture attendance or examining how many pages a student read on an e-text.

Current practices in learning analytics are focused on exploring big data, something that students produce en masse. One example of this is keeping track of attendance at lectures, correlating that with the number of hours spent reading an e-textbook, and using that data to predict success on a specific assessment. This can’t be linked to employability, nor can it be linked to the relative excellence of the instructor. Likewise, teaching intensity cannot be linked to a specific number of hours or type of teaching style.
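As a concrete illustration of what that kind of analysis involves, the sketch below (in Python, with invented data) fits a simple least-squares model predicting an assessment score from lecture attendance and e-textbook hours. It is an assumption about how such a pipeline might look, not a description of any university’s actual system.

    # Hypothetical illustration: predicting an assessment score from lecture
    # attendance and e-textbook reading hours. All data here is invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    lectures_attended = rng.integers(0, 21, size=n)   # out of 20 lectures
    etext_hours = rng.gamma(2.0, 5.0, size=n)         # hours spent in the e-textbook
    noise = rng.normal(0, 10, size=n)
    score = 40 + 1.2 * lectures_attended + 0.4 * etext_hours + noise   # simulated marks

    # Ordinary least squares: score ~ intercept + attendance + reading hours
    X = np.column_stack([np.ones(n), lectures_attended, etext_hours])
    coef, *_ = np.linalg.lstsq(X, score, rcond=None)
    print("intercept, attendance, e-text coefficients:", np.round(coef, 2))

    # Predicted mark for a hypothetical student: 15 lectures, 8 hours of reading
    print("predicted score:", round(coef @ [1, 15, 8], 1))

Even a well-fitting model of this kind says nothing about employability or about how well the lecturer taught.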

What makes a great teacher? Matej Kastelic/www.shutterstock.com

Research into learning analytics is growing apace but is still nascent – so it is a problem that politicians have decided to use it as a promised messiah to define and measure excellence.

This is not to say that learning analytics are not useful – they are very good at doing specific things that can possibly improve the student experience. For example, metrics can identify students who do not access the class materials or attend the lectures. These students can be taken aside and asked if they need additional support.
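In practice that kind of early-warning check can be a simple filter over engagement records. The following sketch, in Python with pandas, is one assumed form it might take; the column names, thresholds and figures are made up for illustration.

    # Hypothetical illustration: flagging students with no recorded engagement
    # so staff can ask whether they need support. All values are invented.
    import pandas as pd

    engagement = pd.DataFrame({
        "student_id":        ["s01", "s02", "s03", "s04"],
        "lectures_attended": [9, 0, 7, 1],
        "vle_logins":        [34, 2, 0, 5],   # logins to the virtual learning environment
    })

    # Flag anyone who has attended no lectures or never logged in to the VLE
    flagged = engagement[(engagement["lectures_attended"] == 0) | (engagement["vle_logins"] == 0)]
    print(flagged["student_id"].tolist())   # students to check in with, not to penalise

Being flagged this way does not prove a student needs support; it only suggests a conversation.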

But here is the conundrum: there is no empirical evidence that all students who display these behaviours need additional support. Learning analytics are increasingly seen as a panacea for anything that may ail education. But this has not proved true over the past ten years: we have terabytes of big data on student learning but very little empirical research on its actual impact. Outputs and outcomes measured in lectures attended are not measures of impact on the individual lives of university students.

The introduction of learning analytics as a measure of teaching excellence will have one definitive outcome: spurious correlations. Lest we forget, correlation does not equal causation, and the best that learning analytics can currently do is show that more years of completed education correlate with higher graduate earning potential. That is not enough to undermine the years of educational research that stresses the importance of relationships and the presence of teachers in the classroom.

Game on

Suggesting that universities use solely qualitative measures to examine teaching and learning is not practical, but there needs to be a balance between what the statistics may reveal and the actual teaching and learning experience.

The government has charged the Higher Education Funding Council for England – now to be subsumed into a new body called UK Research and Innovation – with the task of developing a system of checks and balances to measure teaching excellence so that universities do not try to game the system. These measures are slated to go live in year three of the TEF roll-out.

The next three years are likely to see a rash of university policy and practice that will not encourage collegiality – nor will it help to build bridges between innovative teaching practice and quality learning. Instead it may produce the same wheeling and dealing that the Research Excellence Framework does, except this will be much more frequent. The game has officially changed.
