The Excellence in Research for Australia Initiative (ERA) is the federal government’s latest attempt to quantify the “excellence” (or otherwise) of Australian researchers.
And just a few short weeks ago submissions closed for ERA 2012, to the great relief of university research offices around Australia.
Unlike the dreaded Higher Education Research Data Collection (HERDC) exercise – which rewards universities for pumping out as many papers as possible, even if they’re of low impact – many of the aims of the ERA process are to be welcomed. ERA combines an assessment of quality and quantity.
And that’s a good thing, because bad research isn’t worth doing (or funding).
In my experience, Australian academics used to be uniformly of the opinion that their research was world class; or that, if it wasn’t, it was rapidly getting better; or that, if it weren’t for all their teaching and administrative duties, they’d be awesome.
And then ERA 2010 came along.
In ERA 2010 the Group of Eight (Go8) universities, which get the bulk of competitive grant funding, did pretty well. Lots of ERA 4s and 5s with a few blemishes but, on the whole, the results were reassuring for the government (and the taxpayer).
The emerging universities didn’t do so well. The odd 5 and 4, some 3s, many 2s, and even (gulp) 1s.
In case you haven’t worked it out yet, an ERA 5 rating means well above world standard; ERA 1 means the opposite.
Universities brag about high ERA scores in the same way first-year undergraduates have their Australian Tertiary Admissions Rank (ATAR) scores tattooed on their foreheads.
A change for the better
ERA 2010 measured the period 2003–2008 inclusive, and the newer research institutions might argue their staff and outputs, as of 2012, are better than they were in the middle of last decade. To some extent they are probably right.
The only problem is that, when the next change of government occurs, we can’t possibly expect the Coalition to keep the current system, because that would be admitting the Labor government did something right. Which, as improbable as it sounds, has many of us wondering whether perhaps this really is the end of (an) ERA?
Fortunately, ERA can be greatly improved. So apart from coming up with a different acronym, how could the Coalition change ERA for the better?
Senator Mason, take note!
When the ERA 2012 assessment is completed later this year, we’ll all know what the Australian Research Council (ARC) thinks of our universities, succinctly distilled into a single number, between 1 and 5 (whole numbers only) for each research grouping.
And therein lies the problem: one whole number for each field of research. So although there may be 100 researchers in a given discipline area at any given university, their ERA ranking will be represented by a single digit, regardless of each individual’s own performance.
Furthermore, it’s not hard to imagine that a lot of time and effort has been spent by university administrators cleverly “hiding” their poorer researchers and outputs in “ballast” four-digit codes.
Some research outputs span areas, and can be submitted under different research codes. So if a university takes all of its low-quality outputs, places them into a sacrificial code and writes off the relevant authors, it can strengthen the other areas in which it aims for a high score.
Since ERA doesn’t report on the size of each research grouping, it becomes tempting to maximise the number of highly-ranked disciplines, even if they are tiny.
On paper, four 5s and a 1 looks a lot better than two 4s and two 2s, which might be achieved via some relabelling of some research outputs.
Of course the silliness here is that the gross output of the university doesn’t change just because you’ve managed to hide your poorly-performing researchers in a few codes you are prepared to sacrifice.
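The arithmetic of this gaming is easy to sketch. In the toy example below – invented scores, not real ERA data – shuffling exactly the same outputs between codes leaves the university-wide average untouched, but makes the rounded per-code profile look far better:

```python
# Hypothetical per-output quality scores (1-5) for one university.
# The scores and groupings are invented for illustration only.
outputs = [4.6, 4.7, 4.8, 4.9, 1.2, 1.3, 3.1, 3.2]

def profile(groupings):
    """Round each code's average score to a whole-number ERA-style rating."""
    return [round(sum(g) / len(g)) for g in groupings]

# Honest grouping: strong and weak work mixed across four codes.
honest = [[4.6, 1.2], [4.7, 1.3], [4.8, 3.1], [4.9, 3.2]]

# Gamed grouping: the weak outputs dumped into one "sacrificial" code.
gamed = [[4.6, 4.7], [4.8, 4.9], [3.1, 3.2], [1.2, 1.3]]

print(profile(honest))  # [3, 3, 4, 4]
print(profile(gamed))   # [5, 5, 3, 1]

# The raw university-wide average is identical either way.
print(sum(outputs) / len(outputs))
```

Same outputs, same people: two middling 3s and two respectable 4s become two 5s, a 3 and a write-off 1.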
Doesn’t the government want universities to be doing more than manipulating research classifications? Hopefully, yes!
A flawed procedure
For some strange reason, as with electrons orbiting atomic nuclei, the ARC wants to force collections of researchers into “quantum states”. So although the raw scores might have left your ERA grouping at 4.49, you’ll probably get rounded back down to a 4.
One more publication might have made you a 4.51 and delivered the magical 5 rating!
Ideally the ARC could publish a histogram of each individual’s own rating within a discipline, between 1.0 and 5.0, and then averages, standard deviations, medians, maybe even skewness. This would avoid quantisation errors, and allow a truer representation of research excellence from each discipline.
Extending this further, one could imagine a dot on a scatter diagram that showed the impact of each publication within a code on the y-axis and the individual researchers on the x-axis.
Then we could get a feeling of whether a group’s outputs were dominated by one individual, dragged down by a few part-timers, or of high quality but limited in number.
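As a sketch of what such a richer report might contain – the ratings below are invented, not ARC data – a handful of standard statistics already says far more about a discipline than one rounded integer:

```python
import statistics

# Hypothetical individual ratings (1.0-5.0) within one discipline code;
# the numbers are invented for illustration only.
ratings = [4.8, 4.5, 4.4, 4.2, 3.9, 3.1, 2.2, 1.5]

mean = statistics.mean(ratings)
median = statistics.median(ratings)
stdev = statistics.stdev(ratings)

# A simple (Fisher-Pearson) skewness estimate: a negative value means a
# tail of weaker performers dragging an otherwise strong group down.
n = len(ratings)
skew = sum((x - mean) ** 3 for x in ratings) / (n * stdev ** 3)

print(f"mean={mean:.2f} median={median:.2f} sd={stdev:.2f} skew={skew:.2f}")
print(f"rounded ERA-style rating: {round(mean)}")
```

Here the single rounded “4” hides a strongly negative skew: a core of excellent researchers plus a weak tail, exactly the distinction the scatter diagram above would expose.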
But the deeper you start to look at these measures, the more you realise the flaws.
In some areas impact is relatively easy to measure from citations on short timescales but, alas, not in all.
Within a discipline, some areas and activities cite extremely well, and in others, such as instrumentation, not so well. In some disciplines, such as mathematics, citations are almost meaningless.
And once we start talking about “esteem” factors – such as editorial boards, members of the academy, the relative worth of a Nobel Prize to a Fellow of the Royal Society – it all starts getting a bit arbitrary.
How many Nature papers are the equivalent of a Nobel Prize or membership of an editorial board? Do two ten-citation papers become equivalent to one 20-citation paper? Is a 5 researcher plus a 1 researcher equal to two 3 researchers?
Ultimately, the government might have to accept that, as with the momentum and position of a subatomic particle, research excellence is impossible to quantify.
And yet in 2012 a portion of each university’s income to help with the indirect costs of research – the so-called “Sustainable Research Excellence Threshold 2 funding” – was allocated using the ERA results, with an ERA 5 worth seven times as much as a 3, and 1s and 2s worth nothing.
The problem for the younger institutions was that an ERA 5 at a poorly performing “average research income per EFT” university got much less funding than an ERA 5 at a uniformly excellent one.
Why? Well, the total dollar amount for each ERA grouping was multiplied by the average category 1 research income per academic, so poor areas contributed nothing and diluted the income rewards of the stronger ones.
So much for the incentive for the fledgling research universities to be rewarded for concentrating their research efforts!
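A toy sketch makes the dilution effect concrete. The real SRE Threshold 2 formula is more involved than this, and the weights and incomes below are invented (the source only fixes a 5 at seven times a 3, and 1s and 2s at zero):

```python
# Invented illustrative weights: only the 7x ratio of a 5 to a 3, and the
# worthlessness of 1s and 2s, come from the scheme described above.
ERA_WEIGHT = {5: 7, 4: 3, 3: 1, 2: 0, 1: 0}

def sre_share(era_ratings, avg_income_per_eft):
    """Funding share: summed ERA weights scaled by average research income."""
    return sum(ERA_WEIGHT[r] for r in era_ratings) * avg_income_per_eft

# The same ERA 5 pays off very differently at a uniformly strong,
# income-rich university than at an emerging one with low average income.
print(sre_share([5, 5, 4, 4], avg_income_per_eft=100_000))  # strong uni
print(sre_share([5, 2, 2, 1], avg_income_per_eft=20_000))   # emerging uni
```

Because the emerging university’s 1s and 2s earn nothing yet still drag down its average income multiplier, its one pocket of excellence is rewarded at a fraction of the rate.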
This meant the Go8 universities cleaned up with this funding change, as the majority of their codes were rated ERA 4 and above, and their average research income was much higher. Taking the difference between the SRE 2012 and 2011 allocations, the University of Queensland pocketed an extra A$5.8 million this year – the Queensland University of Technology just A$15,000.
The other major problem with the whole ERA concept is that most university academics teach. And this, ladies and gentlemen, is the killer. Why? Because, unlike research quality, the government doesn’t seem to mind what your teaching quality is like.
University funding is independent of teaching quality.
And quality teaching takes time, and that time makes it harder to do research, especially quality research.
So the incentive is clear: if you are going to do research, concentrate it heavily in one area and minimise the time your researchers spend on teaching.
Don’t set teaching assignments such as essays, because they take time to mark. Go for multiple-choice questions instead, and perhaps lump most of the assessment into the end-of-term exam.
That will get your Q-index (a daily measure of an individual’s research worth used at the University of Queensland) firing on all cylinders!
It’s also worth using sessional staff or teaching-only academics wherever possible to not dilute your research effort.
Just do what you can to propel your university up the research rankings and gain as many ERA 5s as possible!
Professor Matthew Bailes is a member of the ERA-5 rated Centre for Astrophysics and Supercomputing at the Swinburne University of Technology.