Open access not as simple as it sounds: outgoing ARC boss

Some proponents of open access publishing are naive, Margaret Sheil says.

After almost five years at the helm of the Australian Research Council (ARC), Margaret Sheil will this week step down to take up the position of Provost at the University of Melbourne.

Professor Sheil has enjoyed a long career in academia, including a 17-year stint at the University of Wollongong, where she started as a chemistry lecturer and rose to become that university’s first deputy vice-chancellor research.

As CEO of Australia’s biggest research funding body, Professor Sheil was in charge of a budget of $580 million on her arrival in 2007. The council’s grant schemes are this year worth $810 million.

Here she explains why the ARC has no plans to make academics publish taxpayer-funded scholarly research in places where anyone can access it for free, and talks about the difficulty of measuring the broad social and economic impact of academic work.

The CEO of the National Health and Medical Research Council (NHMRC), Warwick Anderson, recently announced that all research funded by the NHMRC must be available for free within 12 months of publication. At present, the ARC merely “encourages” researchers who receive taxpayer-funded grants to publish their work in journals or databases that do not charge for access. Advocates of open access have criticised that policy as too weak. Would the ARC consider going down the same route as the NHMRC?

We’re quite comfortable with our current position and we don’t have any plans to change that at the moment, because we serve a much broader, much more complex research community than the NHMRC. We would not want to move to a position of mandating [open access] until we understood the full range of those complexities: whether [academics] are in a position to comply, whether they can afford to comply.

There are a whole range of cost issues in relation to open access, so we feel that the position we’ve taken, which is to strongly encourage [open access] and [require academics to] explain why not, together with the provisions we’ve put in place to allow up to 2% of each grant awarded to be used towards publication costs, is a reasonable and considered position.

Proponents of open access say that the very nature of taxpayer-funded activities means they ought to be transparent and visible to all. But they make the point that it’s not just about making research open and available to everyone; it’s also about making it accessible to other researchers, which is a critical part of sharing in and building on scholarly knowledge. Does it concern you that the ARC is not doing more to aid that process?

No it doesn’t worry me at all, because I think you’ll find that if anything, information has never been more accessible to the research community than it is now. So I just don’t believe that argument that the research community can’t get access to research.

The other issue is that it’s not always appropriate to make research public. Making something publicly available doesn’t necessarily make it accessible. And so there are many, many examples of where protecting intellectual property actually makes it more readily available, because then someone is prepared to commercialise it and make it accessible.

So this is a very complex space. It’s much more complex than many of the open access advocates understand. The naive position is, yes, it’s taxpayer-funded research, we should make it publicly available. But that doesn’t necessarily make it accessible.

You mentioned earlier that there are cost issues related to open access - that the fees required to publish with open access journals could act as a barrier to early-career academics. Is there research or data that backs this up?

Someone has to pay for this. There are two issues: there’s the paying for the research, and then there’s the paying for making the research accessible. And so, either that’s paid for through the traditional model of subscriptions to journals and the publishers bear the cost of making the research accessible, or it’s paid for by the researchers themselves. And so, yes, there’s lots of evidence. The basic principle is that someone has to pay.

Some of the society journals will make allowances for the amount of funding that you have available, but others don’t. And so there is a cost disincentive to publishing in certain [open access] journals, if the cost is borne by the person who wants to get published. But ultimately open access isn’t free. Someone has to pay for it somewhere.

The sort of naive argument that, you know, well this has been funded by the public and so it should be made public, misses out the cost of actually making that information public. That’s where there’s a potential disadvantage to researchers who don’t have the resources to do that.

Very soon you’ll be taking up a new position as Provost at the University of Melbourne. As a Group of Eight university, Melbourne is in talks with the Australian Technology Network of Universities about a new way to measure the quality of research that takes into account its broader impact on society - beyond peer-reviewed journals and academia. Do you agree that universities should be more mindful of how their research affects the wider community, and is it realistic that they could measure that in an empirical way?

There are two issues here. One is whether we should encourage researchers to articulate the benefit and impact of their research, and there’s no question in my mind that we should continue to do that.

The trial that’s going on will probably collect some pretty useful case studies. There’s no argument about that. I think everyone is of the general view that it’s important that the broader impacts of research be articulated and promulgated.

The question of whether you can actually measure a particular impact in a way that could then be turned into a funding allocation is much, much more difficult. Because it’s very difficult to isolate a particular piece of research at a particular time at a particular institution, and link that directly to some sort of impact that you can measure in a verifiable way. That’s much more difficult.

The trial may shed some light on it, but the fundamental issue is that Excellence in Research for Australia (ERA) was broadly accepted because it’s based on a very solid set of metrics and research indicators that people accepted.

But once you start going into the broader impacts of research, no one around the world is doing this in any sort of quantifiable way. What they’re doing in the UK is just a variation of what was proposed here, which is essentially assessing case studies. Those case studies are very important, but it’s unlikely that they’ll have the capacity to be tied back to funding in a rigorous way.

If those case studies are not considered in a quantifiable way as part of funding allocations, how might they be assessed?

Well, I think as part of the communication process, when talking about the value of research and explaining its benefits, that’s how we use them. There’s a whole range of ways we use that kind of principle already. But one of the more important ones is in communication.

You referred to the ERA initiative that you oversaw at the Council - a system designed to measure the quality of research by universities. The ERA replaced the Research Quality Framework. Was that the most important achievement in your time there?

There are a number of things that we’ve done that I’m particularly pleased about, but obviously ERA was a massive exercise that was hugely contested. But we managed to deliver that in a way that was broadly accepted. And we delivered it on time and on budget, with an enormous amount of buy-in from various people.

We’ve had a range of reforms in relation to the way we deal with research records and how that impacts on early-career researchers and on women, and also a number of reforms in relation to Indigenous researcher schemes that I’m particularly proud of.

Among things that are less obvious to the public at large, we’ve completely redone the way that the ARC does business, so we insourced our IT, we’ve got a really robust research management system, we’ve got a much more robust peer-review system. We’ve halved the length of all our funding rules. We’ve simplified a whole range of processes. So there’s a lot of really solid work that’s been done just in terms of the way we do business.

When you look at the National Research Priorities, they appear to be characterised by a focus on science, health and technical disciplines. And yet a large proportion of students and academics are in the humanities and social sciences. Is there a case to be made that the arts are overlooked by funding bodies and the research sector at large?

The National Research Priorities are under review, and they provide a useful framework to describe what we do, but they don’t necessarily drive funding in a very direct way.

I think a broader issue for the humanities and social sciences - and there’s enough evidence from our schemes, I should point out, that the humanities and social sciences have continued to perform strongly, and in some areas have increased in the last four or five years - the issue is articulating the value of the humanities and the arts in their own right, as opposed to trying to describe them in a way that makes them seem more practical.

So the question I get asked quite often is, ‘Why don’t you fund this particular type of research?’ and I say, ‘Well, what do you say to your kids when they come home and say, “I don’t want to study Shakespeare”?’ Essentially you know that studying Shakespeare is a good thing, and so I think it’s important - and I make this point a lot to groups in this area - to see that there’s a value in critical thinking and there’s a value in cultural understanding, and it’s important that we continue to articulate that benefit in its own right, not just because somewhere down the track, it might have a medical or a social application.