University rankings can be highly influential. They can help prospective students to narrow down their choice of institution and, of course, they also give universities something to brag about.
The UK’s elite institutions, Oxford and Cambridge, continue to occupy the top two spots in the latest Times Higher Education World University Rankings. And many other UK universities also appear in the annual list of “world class” universities.
Rankings are very much an indicator of a sector that has been “marketised”. As such, they should probably come with a consumer warning of their own. And although many rankings now offer some explanation of their various methods, it is less clear whether their different audiences carefully read and understand what the methodologies do – or what is actually being measured.
Times Higher Education editor Phil Baty claims their world rankings were developed partly in response to a UK government report that lamented the tendency of British universities to compare themselves to each other rather than to global counterparts.
Introducing international benchmarks means that British universities now need to perform even better as they enter into competition with the rest of the world – both in terms of rankings and attracting students. And in this sense, the UK is competing with both elite US institutions and rising Asian challengers.
How rankings differ
But has this increased competitiveness brought about a better experience for students at UK universities? The answer is complex. It involves several related questions: what do students care about – and what should they demand from their universities and their lecturers? In recent years, the question has also been posed as: what kind of experience are students entitled to? Different rankings answer these questions in different ways.
Students probably care about the quality of teaching first and foremost and, if UK policy discourse is a guide, how employable they will be after gaining their degree. Different ranking organisations address these two issues in various ways. The Times Higher Education World University Rankings says it uses indicators that show evidence of teaching quality but does not directly address the issue of employability – which is harder to compare across countries.
The QS World University Rankings – another well known league table – uses a survey of employers to determine which universities have the best reputation for producing skilled graduates. Another approach is that of the Academic Ranking of World Universities, also known as Shanghai Rankings, which does not claim to measure either teaching quality or employability. Instead, it focuses mainly on indicators surrounding research excellence.
With these different nuances among rankings, students might be better advised to look at the recently released Europe Teaching Rankings. The purpose of the Europe Teaching Rankings was to produce a league table that would speak more directly to students and, presumably, to teaching staff who feel neglected by the mainstream ranking tables. Then again, although developed to measure teaching quality, some of the indicators in this ranking have been criticised for measuring the wrong things.
Choosing a top scorer
My own research shows how analysts develop these different rankings to respond to, and cultivate, different audiences. Different rankings operate according to different “business models”. Some develop their products for different audiences specifically to create more opportunities to sell their expertise. Some rankings are better at this than others.
The Times Higher Education rankings, for example, appear to be among the more successful. From a single world university ranking, they now produce rankings of universities in specific regions – such as the Asia Rankings and the Young University Rankings – and are even developing rankings that address the themes of innovation and social responsibility.
It’s also worth noting that the results for British universities in the Times Higher Education’s Europe Teaching Rankings differ from those of UK-only rankings – such as those produced by national newspapers like The Guardian. The Guardian’s rankings and the other national ones are targeted at students doing their A-levels (as well as their parents) who are starting to think about university options. So teaching-oriented UK universities will quite often do better in these national rankings.
These UK national rankings consider the results of the National Student Survey which aims to measure student satisfaction and is broadly comparable across the UK. These rankings also show the relative performance of different academic departments.
So, for a British student seeking to study a particular degree in the UK, these rankings might be a better starting point than global rankings. Students from overseas would do well to examine both national as well as global rankings to get a better picture of what is on offer in the UK. The same applies to other countries.
Given the large number of organisations producing university rankings and league tables, it is worth asking whether rankings are turning universities into ever fiercer competitors. And while rankings, such as those released this week, can indeed keep universities on their toes, it’s easy to wonder whether a more cooperative, rather than competitive, sector would be better both for universities and the students they teach.