
The Price of Everything

A peer funding model for the arts?

A recent paper by five mathematical computer scientists at Indiana University (published in EMBO Reports, a forum for short papers in molecular biology) proposes a clever new model for science funding that uses collective allocation ("peer funding") rather than the usual expert-panel-and-peer-review mechanism. I want to consider whether this might also work for arts and cultural funding.

Public science and research funding in Australia, as in most of the world, is based on a process that has remained largely unchanged for 60 years. It begins with calls for submissions of reasonably detailed project proposals. These pass through expert panels (e.g. the Australian Research Council) and then on to the peer review process, in which carefully selected "peers" evaluate the proposals and write detailed reports, before passing these back to the panels for final judgement. The high level of process and accountability makes this the gold standard for taxpayer-sourced public funding of research (philanthropic trust funding often mirrors this architecture).

But it is expensive to run, and onerous for all involved. Perhaps one in ten projects proposed will be funded. The time and effort invested by all those seeking funding will tend toward the expected value of the grants, meaning that once the overhead costs of panels and reviewers are added in, the system functions to a considerable degree as a redistribution mechanism. Rob Brooks wrote about this on The Conversation last year.

The new model the computer scientists propose bypasses this expert-panel-and-peer-review system altogether by simply taking the whole public lump of funding, and allocating it unconditionally (yes, unconditionally) to all “eligible” scientific researchers. It would thus function like a kind of “basic income”.

They calculate that if the National Science Foundation budget in the US were divided among all who applied for funding, it would deliver about US$100,000 per scientist. The problem with this, apart from an expected blowout in the number of people who claim to be scientists, is that we’ve just lost oversight, accountability and peer review.

So here’s what the computer scientists propose: everyone who receives funding gives some fraction (say 50% of their previous year’s funding) to other scientists whose work they like or think particularly interesting and valuable. That fraction can be distributed among one or many. The idea is that this works as a collective-allocation mechanism that basically crowd-sources peer review, and with the added advantage that it funds people, not projects. It also gets the incentives right for scientists to concentrate on clear communication of their findings and the value of research.
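The mechanics of this collective-allocation rule are simple enough to sketch. The following Python toy (my illustration, not from the paper; the scientists' names, dollar amounts, and pledge shares are all invented) runs one funding cycle: everyone receives an unconditional basic grant, then each person redistributes a fixed fraction of their previous year's total to peers of their choosing.

```python
def run_year(funding, weights, base, frac=0.5):
    """One cycle of the collective-allocation ("peer funding") model.

    funding: dict of scientist -> total funding received last year
    weights: dict of scientist -> {peer: share}, shares summing to 1
    base:    unconditional basic grant paid to every eligible scientist
    frac:    fraction of last year's funding each scientist must pledge on
    """
    new = {s: base for s in funding}  # everyone starts with the basic income
    for scientist, total in funding.items():
        pledge = frac * total  # fixed fraction of the previous year's funding
        for peer, share in weights[scientist].items():
            new[peer] += pledge * share  # distributed among one or many peers
    return new

# Hypothetical three-scientist community: B is the most peer-regarded.
funding = {"A": 100.0, "B": 100.0, "C": 100.0}
weights = {
    "A": {"B": 1.0},            # A pledges everything to B
    "B": {"A": 0.5, "C": 0.5},  # B splits the pledge between A and C
    "C": {"B": 1.0},            # C pledges everything to B
}
year2 = run_year(funding, weights, base=50.0)
# year2 -> {"A": 75.0, "B": 150.0, "C": 75.0}
```

Note that the total pool is conserved: the agency pays out only the basic grants, and the pledges merely reallocate last year's money according to peer regard, so B ends up with the most funding and, under the fixed-fraction rule, the largest say in next year's allocation.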

This method replicates the good parts of the previous model: those with higher peer regard will receive more funding, and those same people will have a larger say in the overall allocation (since the pledge is a fixed fraction of the previous year's funding). There would, obviously, still need to be confidentiality and conflict-of-interest safeguards, along with careful monitoring to ensure that circular funding schemes are identified and punished.

But it also avoids the bad parts: in providing a guaranteed basic income, it liberates researchers from continual wasteful cycles of grant-writing by furnishing autonomy and stability of funding; it avoids the overheads associated with process and review; it enables a continual updating of funding to reflect the preferences and priorities of the scientific community, without getting caught in legacy priorities or political cycles.

Now might this also work for public funding of arts and culture? The main reason to think it might is that the same inefficiency arguments apply in arts and culture as they do in science: namely that those seeking grants spend considerable time and effort writing and preparing grants; face high uncertainty about funding outcomes; proposals tend toward conservative trend-following of agency preferences; projects, not people, are funded; and all the while arts funding bodies and panels (and the peer review process) consume sizable overhead.

On the flip side, it's not as neatly obvious who would be eligible. Research scientists can be reliably identified by the high hurdle of PhDs, prior publications, and full-time appointments at accredited institutes. But let's suppose we can come up with an acceptable solution to that eligibility problem. (I'm not suggesting this is trivial; just that it's not what I want to focus on here.)

I think that this would, potentially, be a substantial step towards a more open and effective funding model (peer driven, not bureaucratically or politically driven). It would enable creative resources to be more directly spent on artistic production and public communication, with less time and effort wasted on endless rounds of grant-writing and reviewing.

And while still some distance from a decentralised and fully-incentivised market ideal of “consumers voting with their own dollars”, it is at least closer to that model in reflecting the preferences and judgements of the actual community of practising producers of culture (which is not always identical to appointed “expert” panels). Like the Oscars, in a way.

Might collective allocation of arts and cultural funding be superior to expert-panel based solutions? What do we think: crazy or not?
