There is no good reason the views of students should be disregarded in what defines quality higher education. Shutterstock

What makes a good university? Academics and students have different ideas

New analysis of the Federal Government’s Quality Indicators for Learning and Teaching (QILT) surveys and the Times Higher Education (THE) world university rankings reveals a divide between what academics think defines quality teaching and what students actually experience.

Quality teaching is a central focus of the QILT surveys. Satisfaction levels among Australian university students are generally high, at around 80%. However, there is often a significant gap between what students consider a quality teaching experience and what academics consider quality teaching.

World university rankings continue to attract attention, with almost all Australian universities now participating. Millions of potential students from around the world use ranking information to make choices about their future studies. Rankings guide policy, investment, jobs and partnerships across the globe.

The THE university ranking, operated by TES Global, has a media reach of almost 700 million. It ranks 2,150 institutions worldwide, 35 of them in Australia. Overall, Australia performs well in the rankings.

Student perspectives absent from reputation surveys

Half of the Times’ measure of quality teaching is based on an academic reputation survey. That survey, which claims to be the largest of its kind in the world with more than 10,320 respondents, underpins the ranking results.

The academic reputation survey asks respondents which universities are “the best”, based on their experience in a particular field. QILT, on the other hand, asks a series of targeted questions about the quality of teaching. These include overall experience, learning engagement, explanations of assessment, and course structure and focus.

The ranking is a major source of information for prospective students, yet it contains no student experience data. Despite this, universities use the ranking to promote a quality education experience, based on information that provides only part of this assurance, and none of it from the student’s perspective.

Analysis indicates the academic reputation survey conducted by the Times has almost no bearing on the experience students actually report. In many cases, there is a significant divide between what academic peers and what students consider to be quality teaching.



Size and ranking correlate, but ranking and satisfaction do not

Interestingly, universities whose academic peers are critical of them actually tend to rate better with students.

The typical Australian university with very optimistic academic peers is 106 years old, has more than 55,000 students and performs 71% better on quality teaching according to the rankings, but rates average or below average according to students. Examples are Monash University, the University of Melbourne, the University of New South Wales, the University of Queensland and the University of Sydney.

On the other hand, Australian universities with very pessimistic academic peers are, on average, 29 years old, have about 27,000 students and perform 36% worse on quality teaching according to the rankings, but rate up to 13% better according to students. Examples include Australian Catholic University, Bond University, Edith Cowan University, Murdoch University, the University of the Sunshine Coast and Western Sydney University.



In Australia, there is a relationship between size and ranking: highly ranked institutions are usually big. The expansion of the higher education sector has coincided with Australia’s strong ranking results on the world stage. The catch is that when it comes to student satisfaction, being big doesn’t seem to help.



Elation & peril

There is also a circular logic to quality teaching in the rankings. High rankings build reputation, while reputation is required to achieve a high ranking. So it makes sense that the rankings are generally stable at the top and volatile in the middle, with a long tail.

The Times does attempt to gather data on student experience, but only 1,000 students are included, all of them based in the UK. In Australia, the Commonwealth-supported QILT program surveys more than 123,000 students each year and is far more comprehensive.

While rankings can be a source of elation, a fall in the rankings is the dread of almost all universities. Nonetheless, rankings are a mainstay of the global higher education landscape and they are useful. Policymakers, like students, should consult them with careful deliberation. And if expectations among students are high, those expectations should be respected.

All this confirms that lower-ranked universities are not simply an assembly of “easy-to-please” students. Rather, the ranking of quality teaching using reputation indicators is incomplete.

Notions of quality are subjective, with many facets and competing perspectives. That said, the views of students should be respected, not ignored, in defining quality higher education.
