There was much celebrating around Australia’s university campuses when the Minister for Innovation, Industry, Science and Research, Senator Kim Carr, announced changes to the “Excellence in Research for Australia” (ERA) scheme.
A key decision was the removal of the prescriptive ranking system of A*, A, B and C level journals. In its place is to be a more complex measure of journal quality, with greater recognition of multi-disciplinary research.
In his media statement, Carr noted: “there is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings. One common example was the setting of targets for publication in A and A* journals by institutional research managers”.
The change should come as no surprise. An earlier Federal Government attempt to measure research quality was also problematic.
That scheme attempted to measure quantity, quality and impact and, as with the ERA, it cost millions of dollars and thousands of hours of time as Australia’s universities scrambled to get the best score.
Major problems with the ERA were its narrow focus on journal rankings as a measure of academic performance and a lack of transparency about how some journals were ranked.
Science and Technology Australia (formerly the Federation of Australian Scientific and Technological Societies) warned in 2010 that the ERA suffered from inconsistencies in the way journals were ranked across disciplines.
While some fields regarded multidisciplinary journals highly, others did not. Some academic fields had more A and A* journals than others, while new and emerging fields seemed disadvantaged.
Multidisciplinary and interdisciplinary research is now recognised as the frontier for innovation around the world. However, the ERA worked against this trend and disadvantaged scholars publishing outside their immediate discipline areas.
The Fields of Research codes used by the ERA were often a poor reflection of researchers’ locations within universities, with many researchers working across disciplinary fields.
While the science disciplines relied on impact factors when assessing journals, social sciences and the humanities did not, due to the poor coverage of non-science journals in benchmark systems such as that of the Institute for Scientific Information.
Arguably, Google Scholar offers more complete coverage of academic citations. The ERA also gave little weight to Australian-based journals and ignored research books.
This disadvantaged fields such as Australian history, literature and industrial relations, which led the Deans of Arts, Social Sciences and Humanities to question the merits of the ERA as far back as 2009.
There has been a quest for international benchmarks in universities for many years. This reflects the globalisation of the sector over the past thirty years.
It is also a response by governments to public pressure to see what return is being obtained from public funding of university research. As a result, the sector has become obsessed with league tables, often to the detriment of appropriate academic activity.
An unfortunate outcome of the ERA was the way it was presented within the media and interpreted by the sector.
At an institutional level, the ERA suggested 70% of Australia’s universities were performing below world’s best practice in research.
This did little to assist Australia’s embattled international education sector, our third most important export industry and one that has already endured hard times recently.
It would be interesting to see the reaction if the Government suggested 70% of Australia’s tourist destinations were not up to standard.
In a review of the British University Quality Assessment system in the late 1980s, one official report noted:
“No one has yet devised even a single indicator of performance measurement, which commands wide support amongst the academic community… and those using performance measures, whether they refer to teaching or research activities, should use them with great caution and considerable humility”.
Little has changed since. Britain’s Research Assessment Exercise remains as controversial as the ERA.
The modern academic must be a good researcher and teacher, but is also expected to engage with the community.
Like the universities in which they work, they serve multiple stakeholders. A paper published in an A* journal may reflect good scientific method or leading-edge theory, but academic impact is not only measured by peer citations, as a number of recent Australians of the Year can attest.
If the ERA or its successor is to have a future it must provide a more balanced measure of research performance. A “one jacket fits all” approach does not work, nor does a narrow focus on ranked journals.
Greater transparency and recognition of interdisciplinary and multidisciplinary work is needed. Finally, it is clear some will seek to “game the system”.
Consequently, it is not sensible to introduce institutional-level performance measures that do not align with individual-level performance measures. This inconsistency needs to be addressed before the next ERA round.