
Students beware: university rankings should come with health warnings

Open days help students work out where to apply. Engineering at Cambridge/flickr, CC BY-NC-ND

As first-year students around the UK arrive to start freshers' week and begin university, sixth formers are turning their attention to university applications. In deciding on their choice of university, many students make use of rankings of universities published in the media, such as The Complete University Guide, The Sunday Times Good University Guide or the Times Higher Education World University Rankings, which have just been published. But can such rankings be trusted?

These university rankings, which present a list of institutions in the form of a league table, suggest a precision that is unlikely to be supported by detailed examination of the data. They deliberately draw attention to each university's performance relative to all others. But the gaps between ranking positions, which can appear large, often conceal the fact that the underlying scores from which the rankings are derived differ only very slightly.

This is illustrated in the graph below using the 2016 rankings from The Complete University Guide. It shows the relationship between a university's index score – an amalgamation of data used to calculate the rankings – on the vertical axis and its ranking on the horizontal axis. Where the dots lie on a relatively flat part of the line, universities that are several places apart in the ranking differ only slightly in their index scores.

Comparing a university’s ranking on The Complete University Guide and its score on the guide’s index. The Complete University Guide, Author provided

For example, there are greater differences between the top six universities (where the spread in index scores is 74 points) than between the subsequent seven, ranked 7 to 13 (where the spread is only 36 points). Given the premium an institution gets from being able to refer to itself as a “top-ten” university, those in the teens just outside this band (but with scores close to the ones inside the top ten) must find the situation particularly galling.
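To make this concrete, the short Python sketch below plots index score against ranking position. The scores are made up, chosen only to reproduce the 74-point and 36-point spreads quoted above; they are illustrative and are not the actual Complete University Guide data.

```python
# Illustrative sketch only: the scores below are invented to mimic the
# shape of the ranking curve, not taken from the real 2016 data.
import matplotlib.pyplot as plt

# Hypothetical index scores for the institutions ranked 1 to 13.
scores = [1000, 978, 955, 948, 935, 926, 890, 884, 879, 872, 866, 859, 854]
ranks = list(range(1, len(scores) + 1))

top_six_spread = scores[0] - scores[5]      # gap across ranks 1-6
next_seven_spread = scores[6] - scores[12]  # gap across ranks 7-13
print(f"Spread across ranks 1-6:  {top_six_spread} points")
print(f"Spread across ranks 7-13: {next_seven_spread} points")

# Where the curve flattens, large jumps in rank correspond to only
# small changes in the underlying index score.
plt.plot(ranks, scores, "o-")
plt.xlabel("Ranking position")
plt.ylabel("Index score")
plt.title("Index score vs. ranking (illustrative data)")
plt.show()
```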

A range of indicators

There are many dimensions on which universities’ performance can be evaluated, and a variety of possible indicators for UK universities can be found on websites such as that of the Higher Education Funding Council for England. Rankings produced by the media use these data and other information (such as the National Student Survey) to construct their own league tables.

The Complete University Guide, for example, examines ten aspects of activity likely to be of interest to prospective students: entry standards, student satisfaction, research assessment, research intensity, graduate prospects, student-staff ratio, academic services spend, facilities spend, “good honours” (the proportion of first degree graduates who obtain first and 2:1 degrees) and degree completion. The Times Higher Education World University Rankings, however, differ in that they cover not just the teaching and research activities in their international data set of universities, but also their knowledge transfer and international outlook (which includes campus diversity and collaboration with researchers around the world).

A weighting system is applied to the individual components of the index to produce an overall performance ranking. Publications therefore vary in the weightings applied as well as the underlying indicators used to derive their rankings.

What’s in a weighting?

The Complete University Guide assigns weightings of between 0.5 and 1.5 to the ten individual measures. While the methodology underpinning rankings (in particular how the separate indicators are combined to form an overall indicator) is often clearly provided, the justification for it is not. Publishers of these and other university rankings generally do not explain why they have chosen particular weightings, nor acknowledge that other weightings could be equally legitimate and that a different weighting scheme could produce different rankings.
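As a rough illustration of why this matters, the sketch below builds an overall index as a weighted sum of component scores and shows that two equally plausible weighting schemes can order the same institutions differently. The university names, component scores and weights are entirely hypothetical and are not taken from any published table.

```python
# Sketch of an overall index built as a weighted sum of component scores,
# and of how a different (but equally defensible) weighting reorders the
# institutions. All names, scores and weights are hypothetical.
import numpy as np

# Hypothetical normalised component scores, in the order:
# entry standards, student satisfaction, research, graduate prospects.
scores = {
    "Uni A": np.array([0.95, 0.70, 0.90, 0.85]),
    "Uni B": np.array([0.88, 0.92, 0.80, 0.90]),
    "Uni C": np.array([0.90, 0.85, 0.86, 0.80]),
}

def rank(weights):
    """Return universities ordered by their weighted overall index."""
    w = np.array(weights)
    index = {name: float(s @ w) for name, s in scores.items()}
    return sorted(index, key=index.get, reverse=True)

# Two weighting schemes within the 0.5-1.5 range produce different orders.
print(rank([1.5, 0.5, 1.5, 1.0]))  # research-heavy: ['Uni A', 'Uni B', 'Uni C']
print(rank([0.5, 1.5, 0.5, 1.5]))  # student-experience-heavy: ['Uni B', 'Uni C', 'Uni A']
```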

To illustrate the importance of the weighting system, we can examine the relationship of the overall index from The Complete University Guide with each of its ten individual components by performing what’s called a correlation analysis. This looks at how universities fare on each individual measure compared to how they fare in the actual index. If the index has a strong linear relationship with an underlying component, the correlation is close to one; if the relationship is weak, it is close to zero.
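The sketch below shows what such a correlation analysis looks like in practice, using randomly generated, hypothetical component scores rather than the real data: each component's Pearson correlation with the overall index is computed across all universities.

```python
# Sketch of the correlation analysis described above, on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
n_universities = 120

# Hypothetical component scores for each university.
satisfaction = rng.normal(4.0, 0.3, n_universities)
entry_standards = rng.normal(400, 60, n_universities)

# A hypothetical overall index driven mostly by entry standards.
overall_index = (0.9 * (entry_standards - 400) / 60
                 + 0.2 * (satisfaction - 4.0) / 0.3
                 + rng.normal(0, 0.3, n_universities))

# Pearson correlation of each component with the overall index:
# values near one indicate a strong linear relationship, near zero a weak one.
for name, component in [("entry standards", entry_standards),
                        ("student satisfaction", satisfaction)]:
    r = np.corrcoef(component, overall_index)[0, 1]
    print(f"Correlation of {name} with overall index: {r:.2f}")
```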

A simple analysis reveals that student satisfaction and facilities spend are only weakly related to the overall index of The Complete University Guide (with correlation values of 0.35 and 0.34 respectively) and hence are poorly represented by the overall university rankings. Users of rankings for whom these dimensions are of particular interest would therefore appear to be poorly served by the overall ranking.

This finding is not unique: analysis of other rankings in the media leads to similar conclusions. In the Sunday Times University Guide, for example, some of the components used, such as the drop-out rate and, to a lesser extent, student satisfaction, are only weakly related to the overall index.

More weight

Rankings are becoming increasingly important to individual universities. National and global rankings can be used by other institutions to identify suitable collaborative partners, by students to inform their choice of university, by prospective academic employees seeking new posts, and by employers for recruitment.

Many of the underlying components of rankings, however, are under the control of the university – meaning they are not always an independent measure. Graduation rates, for example, can be improved by more effective teaching delivery (which is a desired effect of producing such rankings and assessing performance) – or by lowering standards (so-called “grade inflation”).

More generally, there is concern from senior managers of universities that some measures in league tables are vulnerable to “cheating” behaviour, and evidence that universities are, in fact, manipulating, influencing or reclassifying data in order to raise their rankings. Those using rankings are therefore in danger of being misled if universities adopt gaming behaviour to ratchet up their position in the league table.

University rankings should come with a serious health warning and be handled with great care.
