Universities should change the way they measure success

How do universities measure their success? How should they? Ben Beiske/Flickr, CC BY-SA

Universities have a vast array of measures to gauge how successful they are. Most of the measures have a lot to do with prestige and not much to do with the outcomes of their graduates or the quality of the education their students receive.

Cut-off scores

Probably the most pervasive measure of universities’ success in Australia and in many other countries is their selectivity of undergraduate student entry, measured in Australia by cut-off scores. A program’s cut-off score is the lowest score needed to gain entry to the program.

Cut-off scores are not a good indicator of the quality, or even of the prior attainment, of entering students, because most students enter with scores well above the cut-off. A program with a cut-off of around 60, for example, may admit mostly students with scores above 80. A far better measure of the attainment of students entering a program, inasmuch as it is a good measure of anything, is the median or middle entry score of students.
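
As a rough illustration, the short Python sketch below shows how far a published cut-off can sit below the typical score of admitted students. The entry scores here are hypothetical, not drawn from any real program.

```python
from statistics import median

# Hypothetical entry scores for one program: most admitted students
# sit well above the published cut-off.
scores = [61, 72, 78, 81, 83, 85, 88, 90, 92, 95]

cutoff = min(scores)      # lowest admitted score, i.e. the "cut-off": 61
middle = median(scores)   # middle (median) entry score: 84.0

print(f"Cut-off: {cutoff}, median entry score: {middle}")
# The cut-off says little about the typical entrant; the median
# reflects the attainment of the cohort as a whole.
```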

Nonetheless, cut-off scores are widely viewed as an indicator of a program's and a university's quality by students, parents, schools, governments, members of the public and even by university staff who should know better. A good example is the push by NSW Education Minister Adrian Piccoli and the Group of Eight elite universities to require minimum entry scores for admission to teacher education programs, or to all higher education bachelor programs.

Student surveys

Australia, along with many other countries, relies heavily on student surveys to evaluate universities' performance. The course experience questionnaire is administered to graduates some four months after they complete their program.

It has been rigorously evaluated, and several studies have refuted the common criticisms that it favours shallow teachers or soft markers. There are now more than 20 years of course experience questionnaire results, providing a valuable time series of data on university teaching.

Students complete satisfaction surveys. Flickr/Ed Yourdon, CC BY

Administered with the course experience questionnaire is the graduate destination survey. This reports graduates’ employment, study or other activity four months after graduation.

The beyond graduation survey reports on graduates’ activities, outcomes and experiences three years after graduation.

The Australian government also funds a university experience survey. This surveys current students during second semester.

Statistics

The Australian government reports numerous student statistics to assess universities' performance in the admission and outcomes of students from equity groups, and in students' attrition, success and retention.

The Commonwealth also reports data on universities’ staff, financial results and performance.

My University website

The My University website allows users to compare university programs. AAP

Much of this data from student surveys and student and staff statistics is reported for each university program on the government's My University website.

While the website is reasonably comprehensive and lets users compare university programs, there is no evidence that it extensively informs prospective students' choice of programs.

The site is therefore being replaced by quality indicators for learning and teaching. These will include a new employer satisfaction survey if grave methodological difficulties can be overcome.

Excellence in research for Australia

The excellence in research for Australia (ERA) assessments are strong evaluations of each university’s research performance in each of 157 detailed fields aggregated into 22 broad fields and eight discipline clusters.

The 2015 assessments are currently being conducted. While the ERA assessments are reasonably rigorous and accordingly strongly influence university research policy, they need some time and expertise to interpret informatively.

World rankings

Well-funded universities' reputations and the demand for them from international students are strongly influenced by world rankings. There are dozens of world rankings of very variable rigour, coverage and subject matter. There is, for example, a campus squirrel listing, which rates some US and Canadian campuses by the size, health and behaviour of their squirrels.

Of the prominent world rankings, the most rigorous is Shanghai Jiao Tong University's academic ranking of world universities. The ranking's relative rigour is achieved at the expense of a narrow focus on research, particularly empirical research.

Which uni ranks best? Jason James/Flickr, CC BY

The other two prominent rankings rely on reputation surveys. The rankers publish little information about their surveys, but from what is available it is clear that they fall well short of basic undergraduate social science methods.

Half of QS Quacquarelli Symonds' ranking is based on reputation surveys, so it is particularly vulnerable to the bias such surveys introduce and amplify.

The Times Higher Education (THE) ranking is highly volatile, a result of poor methods. THE changed its method and data supplier again last year, so changes from year to year are largely uninformative. Unless THE improves on its previous practice, its 2015 rankings shouldn't be compared with those of previous years.

Evidence of university standards

Much as universities rail against the current league tables and indicators of quality, they boast about their results in shoddy league tables and invest very little in developing alternatives. Universities should provide publicly verifiable evidence of the standards of their graduates. Instead there are numerous indications that standards vary markedly between years, qualifications and institutions.

One possible measure of the standards of university graduates is the OECD's assessment of higher education learning outcomes (Ahelo), which trialled exams for graduates in economics and engineering. The trial involved 23,000 students from 248 higher education institutions in 17 countries, including Australia.

The results from the Ahelo trial were mixed and the project is controversial. That might be expected from a project that was initially presented as higher education’s version of the international school-aged test PISA (Program for International Student Assessment).

Nonetheless, Ahelo still has potential and offers the best prospect of improving on current methods for measuring universities’ success. It would be the only internationally comparative measure of teaching or learning, and it would assess graduates’ attainment, not just report their experiences or satisfaction.

Trust us - we are the experts

Australia is probably more advanced than most countries in measuring its universities' success. However, like many other countries, it relies excessively on numerical measures and the rankings built from them.

There is a tension between people's understandable interest in a simple indicator of quality and the distortion of even the most rigorous indicator by "Campbell's law": the more an indicator becomes a target, the more it measures targeting rather than performance, and the more it distorts the activity it is meant to measure.

In the absence of an array of better indicators, particularly of teaching and learning, it would be better to rely less on simple indicators and more on expert judgement.


The Conversation is running a series on “What are universities for?” looking at the place of universities in Australia, why they exist, who they serve, and how this is changing over time. Read other articles in the series here.
