The end of written grant applications: let’s use a formula

With a grant-determining formula, the hours spent writing grant applications could instead be spent actually doing research.

The winners and losers of the 2013 National Health and Medical Research Council (NHMRC) Project Grants were announced in October. A record low success rate of just 16.9% (down from last year’s 20.5%) meant the champagne stayed in the fridge for most.

Project grants are the major source of funding for new ideas in health and medical research. Many scientists rely upon them for their job or the jobs of their staff.

Failure is a major blow which is made harder by being given almost no feedback. Applicants are told their scores, but are given no idea where they went wrong.

Australian scientists are becoming frustrated by the system, with a survey revealing 95% of scientists agree that changes to the application process are needed, 90% agree that changes to the peer review system are needed and 99% agree that they would like more feedback.

Now for something completely different

Problems with funding peer review are nothing new. An article in Science back in 1981 discussed the “wastefulness of a system” where scientists spend too much time writing applications at the expense of doing actual research.

The suggested solution was that written applications be scrapped and that research dollars could be allocated to departments using the formula:

Dollars per department =
A × (number of Masters degrees + 3 × number of PhD degrees) +
B × (number of published papers) +
C × (dollar support from other agencies) +
D × (dollar support from industry).

A, B, C and D are multipliers that control the importance of each research output.
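The 1981 allocation rule is simple enough to sketch in a few lines of code. The multiplier values below are purely hypothetical placeholders, since (as discussed next) their real values would be contested:

```python
def dollars_per_department(masters, phds, papers, other_support, industry_support,
                           A=1000, B=500, C=0.1, D=0.2):
    """Allocate departmental funding by the 1981 formula.

    The multipliers A-D are invented for illustration; the article notes
    that what they should actually be would be the subject of a huge row.
    """
    return (A * (masters + 3 * phds)   # degrees awarded, PhDs weighted 3x
            + B * papers               # number of published papers
            + C * other_support        # dollar support from other agencies
            + D * industry_support)    # dollar support from industry

# Example: 10 Masters, 5 PhDs, 20 papers, $100,000 agency and $50,000
# industry support
dollars_per_department(10, 5, 20, 100_000, 50_000)  # → 55000.0
```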

The issue is there would be a huge row about what these multipliers should be. Departments with lots of students would argue for “A” to be large, whereas departments that worked closely with industry would argue for “D” to be large. The formula would be good for some, but bad for others who would resist its introduction.

A formula that mimics current funding

It might be possible to keep everyone happy if a formula could be designed that mimicked current funding. The formula would be trained to predict past winners based on historical data for research performance. Its accuracy would then be prospectively tested by its ability to predict the winners in a future round.

This big data problem is ideally suited to a Kaggle competition in which multiple formulae would compete to give the closest match to the current system. Selecting the closest matching formula would mean that the historic knowledge of the funding system would be preserved in formula form.

The formula would be stratified according to research field, because the definition of research quality varies greatly between academic fields. The formula would also be stratified according to experience to ensure that early career scientists were not disadvantaged.
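The training step described above can be sketched in miniature. This uses synthetic data and ordinary least squares standing in for whatever model a Kaggle entrant might actually build; the track-record variables and the hidden historical weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented track records for 200 past applicants:
# columns = publications, citations, prior grant dollars (rescaled to [0, 1])
X = rng.uniform(0.0, 1.0, size=(200, 3))

# Pretend the historical panels implicitly weighted these inputs 1 : 3 : 10,
# with noise standing in for the subjective human element
true_w = np.array([1.0, 3.0, 10.0])
past_funding = X @ true_w + rng.normal(0.0, 0.5, size=200)

# "Train" on 150 past decisions, then test prospectively on the other 50,
# mirroring the idea of predicting the winners in a future round
w, *_ = np.linalg.lstsq(X[:150], past_funding[:150], rcond=None)
predicted = X[150:] @ w
```

Stratifying by research field or career stage, as proposed, would simply mean fitting a separate set of weights for each group.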

A formula has many benefits

The biggest advantage of allocating funding using a formula would be the enormous amounts of scientists’ time that would no longer be wasted on lengthy funding applications. It would also save peer reviewers time, and cut funding agencies' administrative costs.

A formula would remove the subjective human element from funding, which would remove the randomness in funding decisions. A formula is also blind to gender, age and geography, and it solves the conflict of interest problem, which is especially relevant in Australia’s small research community.

It would be a transparent process and the monthly list of winners could be published online. Research money could be distributed at any time, with payments on a monthly basis rather than the current boom and bust which inhibits career development (promising scientists have quit because of their fragile career tenure).

A formula would starve uncompetitive scientists (who would hopefully seek other activities) and nurture competitive scientists to thrive.

Objections to a formula

There may be concerns about scientists gaming the formula, but any formula is likely to be based on outcomes such as getting high quality publications that are highly cited, so it should reward good research behaviour.

Gaming is present in current funding systems, including submitting the same application to multiple agencies and submitting applications where the work has mostly been completed.

A formula may be objected to on the grounds that it would stifle innovation. But many current funding schemes reward conformity rather than risky research. A written application ties a scientist to their plans for three or more years. Funding high performing scientists without requiring a specific research plan should encourage more innovation.

Moving with the times

Funding agencies were essential from the 1970s to 1990s when the need to distribute research funding became great, but the information required to decide on who to fund was not easily available. They played a valuable agency role between governments and scientists.

Today the information needed to make funding decisions is likely freely available on the internet. Funding agencies have become bureaucratic, and have externalised large and avoidable costs onto the research community they are supposed to serve. A formula would be a radical change to funding, but a welcome one.

25 Comments

  1. Gavin Moodie
    Gavin Moodie is a Friend of The Conversation.

    Adjunct professor at RMIT University

    I don't understand the proposal. The 1981 example was of research funds allocated by formula to departments, but the subsequent discussion implies that funds would be allocated by formula to individuals. If the formula were to apply to individuals what would be the elements of the formula and how would the data be sourced? How would groups be funded - would individuals have to volunteer to pool (some of) their individual grants?

    1. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Gavin Moodie

      The formula for individuals would be created by using recent data on who won funding and their track records when they applied. We don't yet know what elements the formula would have. A likely element would be the number of publications. But perhaps the number of publications in the last five years in journals with an impact factor above 10 would be a better predictor? Other parts might be the number of citations and memberships of editorial boards. All this data is freely available on the web. Using freely available data means scientists wouldn't need to enter all this information again into RGMS or RMS.

    2. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Adrian Barnett

      Publications and citations data are indeed freely available on the web, but anyone who has actually used them knows they need considerable cleaning to be used with any precision. Grant data are published on the web but not always in a readily harvestable form, so there would have to be more invested in collecting, cleaning and matching that data.

      Even assuming all those practical difficulties were readily overcome, what is the correlation between these data and grant outcomes? The best predictor of future grants may be (publications x 1) + (citations x 3) + (grants x 10) but if that explains only 30% of the variance in grant amounts it is not much of an advance.

      It would be best to collect some data and try some formulas for this proposal to be taken seriously.

    3. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Gavin Moodie

      We are planning to give this a go. We are currently looking for sponsorship money for a Kaggle prize. I'd hope that someone from the Kaggle world could get predictions that were within 25% of the real allocation. If we can get that close based on automatic (sometimes dirty) data, then imagine how much better it would be using clean data. Also, the collection, cleaning and matching of the data is all part of the Kaggle competition.

      My Google Scholar account is remarkably accurate (although this is a sample size of 1).

  2. Paul Prociv

    ex medical academic; botanical engineer

    As the pie grows smaller, while the number of mouths needing to be fed increases, is there any possible way of keeping everybody happy?

    1. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Paul Prociv

      No funding system will please every individual all the time, but a good system will please more people more of the time.

      Taking the subjective human element out of who wins funding may reduce some of the current bad feeling.

  3. Adam Dunn

    Senior Research Fellow at UNSW Australia

    It's an interesting but flawed idea, and perhaps that's why it didn't catch on?

    1. Only including journal articles that were published in journals with an impact factor above 10 (comment below)?

    Firstly, there are entire disciplines without any journals that come close to that impact factor. Other disciplines rely much more heavily on conferences, monographs, etc. Secondly, the impact factor of journals is known to be an especially poor indicator of the quality of individual articles (e.g…

    1. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Adam Dunn

      We can't call it flawed until it's been given a go. Nobody has had the nerve to try it, but all the data are available. All we need is to tempt some bright statistical minds to have a go.

      The impact factor above 10 was just an example. Any formula would be stratified by field (e.g., public health, clinical sciences). So whatever mark of quality was relevant for that field would likely be selected by the formula.

      The formula would also be stratified by experience, so it would still support more junior researchers. Any formula that didn't would not be a good match to the current system and so would not be a winning formula.

    2. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Adrian Barnett

      As Adam noted, there are several studies which have found very low correlations between publications and research quality and citations and research quality. There is no need for another study to find this. So the proposal is methodologically flawed.

      Again as Adam noted, the aim of research grants is to fund potential, not track record. So any proposal to fund track record is arguably conceptually flawed. No fiddling with formulas would fix this conceptual flaw.

    3. Adam Dunn

      Senior Research Fellow at UNSW Australia

      In reply to Adrian Barnett

      Just for clarification, I do analysis of citations and data mining on a daily basis and I run a grant looking at bias in the translation of clinical evidence. I'm also involved in the broader community that discusses the impact measurable in the academic community as well as more widely in society (which is not counted in the formula at all). These data on citations are not clean nor uniformly available (e.g. you can't mine Google Scholar, and librarians will routinely tell you it's a terrible indicator of citations).

      I'm all for giving it a go to see if you can identify a model that predicts funding based on those criteria and calculating the variance (and perhaps better capturing and understanding the prevailing biases in the system), but besides all that, the fundamental flaw in the idea is: how do you measure a good idea?

      Are good ideas only manufactured by people who have already been funded for a good idea before? Who let them in the door in the first place?

    4. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Gavin Moodie

      The aim of research grants may be to fund potential, but the practice is quite different. There's plenty of evidence that safe grants are funded whilst risky ideas are not, see for example: www.nature.com/nature/journal/v492/n7427/full/492034a.html.

      A PhD in this area by Karen Mow found that track record was the dominant factor in deciding project funding for panel members at both the ARC and NHMRC: http://www.canberra.edu.au/researchrepository/file/2a04fa15-5591-0a4d-ebe1-f49467310292/1/introductory_pages.pdf. So there's strong evidence that track record may be a very good predictor of project success.

      We also know that some researchers apply for funding for work that has already been completed (the complete reverse of funding new ideas). So the really serious flaws are in our current funding systems.

    5. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Adrian Barnett

      I agree that track record predicts grant success, hence my weight of 10 in the hypothetical formula. But if the current research grant system is seriously flawed the aim should be to improve the system, not replicate it.

    6. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Adam Dunn

      Google Scholar was just an example. Any winning formula would need to find the publicly available data that had enough predictive power to get close to the real system.

      Is the current system good at rewarding good ideas? There's strong evidence that it rewards safe ideas rather than good ideas. If we freed scientists from putting in applications based on their safest ideas then they'd have more license to pursue their riskier (more exciting) ideas.

      Getting people in the door will always be an issue, and it's not one that can be solved by major funding schemes. Young scientists need to build their track record by other means.

    7. Adam Dunn

      Senior Research Fellow at UNSW Australia

      In reply to Adrian Barnett

      These are excellent reasons to find a model of where funding goes, and to measure the difference between what the guidelines say and what the practice actually is.

      I still wouldn't suggest replacing a system of implicit biases with a system of explicit biases. That doesn't sound like a good way to improve the system, just a way to maintain (or even increase) the biases.

      Having been funded as a sole CI with a shaky track record and an idea judged as innovative, I certainly hope we never, ever move into a funding system where researchers can just keep doing the same crappy thing over and over as long as they had the connections to get their foot in the door in the first place. It's the exact opposite of science and progression.

      Ultimately, it is very important to be able to identify and measure the biases that we think are there and address them. Not to accept them, especially when the consequences are so dire.

    8. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Adam Dunn

      I think it would be wonderful to have the explicit biases of the system captured in a formula. We would then have total control over the biases and could potentially iron them out.

      For example, does the current system properly capture time taken out to have a baby? If funding were allocated using a formula then we could add a "had a baby" term to the model. Again this could be based on data by examining the track records of women with and without babies.

      I don't think a formula is anti-science. If it's done well it would be far more progressive and transparent than the current system. Young researchers now need to be trained in the many unwritten rules there are about getting grant funding. That's not a productive or scientific use of their time.

    9. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Gavin Moodie

      The biggest thing we could do to improve the current system is cut the amount of scientific hours that go into it (550 years of scientists' time went into the last Project Grant round). This is the biggest and most important problem in my opinion. A formula would instantly return all this mostly wasted time to scientific research.

    10. Adam Dunn

      Senior Research Fellow at UNSW Australia

      In reply to Adrian Barnett

      These are excellent points. I think my concern was that the model was being proposed in a way that would replicate the existing biases rather than to identify and fix them.

      And of course I agree that at least some of the time spent on grant-writing could be better spent.

    11. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Adrian Barnett

      'Had a baby' is not easily collectable from the web. Even if it is published on the web - I'm not sure that it is - one would have to match that with potential grant recipients. Even if that could be fixed, how long should the effect be examined - until the baby is weaned, at school, no longer needs close supervision, no longer dependent?

    12. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Gavin Moodie

      "had a baby" would need to come from researchers, not the web. A formula system would still need researchers to fill out a form to get funding, but it would be around one page, not the 80-120 pages of the current system.

    13. Adam Dunn

      Senior Research Fellow at UNSW Australia

      In reply to Adrian Barnett

      I commented on exactly the same thing back on 1 Nov 2011 when I roughly calculated that it was >700 researcher years in Australia across the ARC for grants that were unsuccessful. Good to know I probably wasn't far off!

  4. Conor King
    Conor King is a Friend of The Conversation.

    Executive Director, IRU at La Trobe University

    Coming in late to this piece. The formula looks very similar to that used for the Research Training Scheme and the Joint Research Engagement scheme, only they allocate at the institutional level.

    At the individual level, if you wanted to test it you would need to look at whether a formula matched for newer researchers as well as it does for older ones.

    The other approach to the issue of too much time spent on proposals is to strip them back to the essentials. If all proposals that meet initial scrutiny (so the writer focuses on setting out the proposal, not finessing it endlessly) were entered into an electronic hat, and proposals drawn until the funds ran out, would it be notably worse for which were chosen?

    13 years ago NHMRC funding was half what it is now in value, yet we still have the same level of oversubscription. It appears researchers will multiply the options to fit the level of funding available.

    1. Adrian Barnett

      Associate Professor of Public Health at Queensland University of Technology

      In reply to Conor King

      Yes, the variance around older researchers would be less because there will be more data compared with younger researchers. But that would also be true when peer reviewers were making decisions about which young researchers to support.

      I think a lottery amongst equally good proposals is a sensible idea, and it also removes the subjective human element. Lotteries are used in some countries to allocate places on medical degrees to equally qualified applicants. The fairest way to pick apart two equally good proposals is at random.

  5. David Stern

    Professor at Australian National University

    Australia does allocate a large fraction of research funding according to formula via the various block grants schemes. Of course, a major input to that is $ of competitive grants won. In the UK, the REF determines the majority of block grant funding. So this proposal isn't so novel, it would be more of a greater shift towards block grants and away from competitive grants.
