Quality not quantity: measuring the impact of published research

Judging the achievements of researchers should be much broader than just looking at their publications. Image from shutterstock.com

Few things are changing faster in the research world than publishing. The “open access” movement recognises that publicly-funded research should be freely available to everyone. Now more than a decade old, open access is changing where researchers publish and, more importantly, how the wider world accesses – and assesses – their work.

As the nation’s medical research funding body, we at the National Health and Medical Research Council (NHMRC) mandate that all publications from research we’ve funded be openly accessible. We and the government’s other key funding organisation, the Australian Research Council, are flexible on how it’s done, as long as the paper is made available.

Researchers may opt for “green” self-archiving, where they publish in a restricted journal and archive a freely available version, or “gold” access, where readers can freely obtain articles via publisher websites.

Most Australian medical research publications will be available through university repositories and by researchers submitting to journals with copyright agreements that support the NHMRC open access policy. The university librarians have been especially helpful in ensuring that the institutional repositories are ready for this revision to the policy.

Initiatives such as PubMed Central (PMC) and European PMC are also making it easier to access published research.

Consumer groups want direct access, as soon as possible, to the findings of research - after all, they pay for it through taxes and donations to charities. This information helps at a time when we’re bombarded with health messages of sometimes dubious origin and where vested interests are often not disclosed.

In 21st century medical research, consumer and patient group members are often integrally involved in the research itself and are important messengers between researchers and the community.

Death of the journal impact factor

The open access movement is also changing how we measure the impact of scientific research.

For too long, the reputation of a journal has dominated publication choice – and that reputation has been determined mainly by the journal impact factor. This metric reflects how frequently a journal’s recent papers, taken as a whole, are cited in other journals.
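
The arithmetic behind the metric can be sketched in a few lines (the journal and the figures below are invented purely for illustration):

```python
# A journal's impact factor for a given year is conventionally the number
# of citations received that year to papers the journal published in the
# two preceding years, divided by the number of "citable items" it
# published in those two years.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 400 citations in 2013 to its 2011-12 papers,
# of which there were 160 citable items.
print(impact_factor(400, 160))  # 2.5
```

Note that it is an average over the whole journal: a handful of heavily cited papers can lift the figure for every other paper published alongside them, which is why the metric says little about any individual article.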

For many years, the journal impact factor dominated how universities, research institutes and research funding bodies judged individual researchers. This has always been a mistake - the importance of any individual paper cannot be assessed on the citation performance of all the other papers in that journal. Even in the highest impact factor journals, some papers are never cited by other researchers.

Consumer groups want direct access, as soon as possible, to the findings of research. Image from shutterstock.com

The NHMRC moved away from using journal impact factors in 2008. So it was good to see the San Francisco Declaration on Research Assessment, which has now been signed by thousands of individual researchers and organisations, come out with such a strong statement earlier this year:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions.

Hear hear.

Measuring real impact

In health and medical research, choosing where best to publish a paper involves so much more than just the prestige (or impact factor) of the journal. In clinical and public health sciences, authors will want the right audience to read the article.

When published in a surgical journal, for instance, the findings of surgical research will influence the thinking of more people, and more of those who should read the research, than when published in a more general journal, even if the latter had a higher impact factor. Similarly, public health researchers want to publish where public health policymakers and officials will read the article.

A single paper in the wide-reaching Medical Journal of Australia – which could change health policy and practice affecting thousands of Australians – may be of greater impact than a paper in a high-impact journal that very few people read.

All this has implications for peer review and the judgement of “track record”. Judging the achievements of researchers must amount to much more than simply counting the number of publications and noting the journals’ impact factors.

I agree with Science editor-in-chief Bruce Alberts, who recently argued that formulaic evaluation systems in which the “mere number of a researcher’s publications increases his or her score” create a strong disincentive to pursue risky and potentially ground-breaking research, because it may be years before the first research papers begin to emerge.

Researchers want to publish where policymakers and officials will read the article. Image from shutterstock.com

The NHMRC has long asked applicants to identify their best five papers. From 2014, we will ask applicants to identify them in their track record attachment, making it easier for reviewers to look at and evaluate them.

In the words of Bruce Alberts, analysing the contributions of researchers requires

the actual reading of a small selected set of each researcher’s publications, a task that must not be passed by default to journal editors.

There is one other potential implication of focusing on quantity rather than quality. It is often alleged (though the evidence is scant) that the pressure to produce many papers and to publish in journals with high impact factors explains some of the apparent increase in publication fraud in medical research.

This may or may not be true, but focusing more on the quality of a few papers, rather than just counting the total number of publications and being overly influenced by the reputation of the journal, can help ameliorate the publish-more-and-more syndrome.

Nothing stays the same in science and research. Publishing is set to change further. The democratisation of publishing began with the internet and has a long way yet to run. The challenge for researchers, institutions and funders will be to identify, protect and encourage quality and integrity.

Read more Conversation articles on open access here.

Join the conversation

11 Comments

  1. Mike Swinbourne

    logged in via Facebook

    I think your last sentence hit the nail on the head Warwick.

    Open access is to be encouraged, but the danger will be that it will allow any old dross to be published. A quick look around the internet will show how biased advocacy organisations regularly publish "research" that would never make it into a peer reviewed journal, because it is far removed from real research and should never be read by anyone.

    There still needs to be a system where we can identify quality research.

    1. Sue Ieraci

      Public hospital clinician

      In reply to Mike Swinbourne

      I suspect the pendulum might swing back. When the open access market is so overloaded that one cannot easily find the good quality research, there might be a trend back to grouping or highlighting of the higher quality stuff back into separate journals or spaces.

    2. Elizabeth Bathory

      9-5 project drone.

      In reply to Mike Swinbourne

      Hi Mike,

      I am just about to submit a manuscript for publication in an open access journal, and I can advise you that, at least in this journal, the peer-review process still applies as it would if the journal were not open access, and I suspect this will be the case in the main.

      There have long been journals that do not undertake a peer review process - these are, as you say, less highly regarded in the hierarchy of "evidence". However, I think open access is an important advance in bridging the gap between academia and the general public, and (I hope) will lead to more informed citizens with a better ability to critically appraise information that's presented to them.

      Open access does not mean a free for all.

  2. Christopher Pitt

    logged in via Facebook

    Is there a standardised way in the research community of assessing impact of a particular paper? If someone claims that their particular research or thesis is widely used or critically acclaimed, is there a way that this can be independently validated? As a clinical doctor and spare-time researcher, I often read statements made by various groups claiming to have made a "cutting edge breakthrough", and although my gut feeling is that their "breakthrough" is wishful thinking or exaggeration, I'm not sure how to confirm my empirical assumption.

    1. Sue Ieraci

      Public hospital clinician

      In reply to Christopher Pitt

      Christopher - as a sometimes-reviewer of papers for publication, training, or grants and awards, I go back to the criteria that are used to adjudicate papers. Some organisations have a checklist. In general, they cover things like:
      - Is the research question well-defined?
      - Is the existing knowledge on that topic adequately reviewed?
      - Does the methodology adequately match the aim(s) or question?
      - Is there sufficient power/numbers/sampling?
      - Are there adequate controls or methods…

    2. Christopher Pitt

      logged in via Facebook

      In reply to Sue Ieraci

      Thanks Sue. I agree with your points, and although I'm somewhat epidemiologically-challenged, I always look for those qualities you describe.

      I think my original question relates in part to article-based metrics. Specifically, I am currently reviewing a book in which the author claims that the theory she developed for her PhD has been influential throughout the world, although when I looked up her PhD on Google Scholar, there were no listed citations. Is Google Scholar a reliable statistical measure of impact?

      What are your thoughts?

    3. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Christopher Pitt

      In my experience Google Scholar is a good start for getting citations for individual publications, scholars, centres or institutions. However, Google Scholar generates some false matches, so the results often need to be cleaned. Anne-Wil Harzing’s Publish or Perish is a good tool for this:

      http://www.harzing.com/pop.htm

      In the particular case you refer to the citations may not be to the PhD but to other publications in which the author disseminated the theory she developed for her PhD. So I would do a Google Scholar search on the author's name and see what publications it found. I would then do a citation count for the publications which seem to be relevant.

    4. Sue Ieraci

      Public hospital clinician

      In reply to Christopher Pitt

      Hi, Christopher,

      I think it's really hard to assess the impact of a piece of research on practice within a specialised area unless you have direct knowledge of current thinking in that field of practice.

      Certainly Google Scholar is a good start, but publications and citations still don’t tell you whether a piece of information or a new approach has been incorporated into practice - that would be a real-world measure of impact.

      If I were looking for a change of practice outside my direct area of experience, I would either ask someone in that area whether the information had changed practice in their area of expertise, or search the literature for the subject matter rather than the author’s name.

    5. Christopher Pitt

      logged in via Facebook

      In reply to Sue Ieraci

      Sue and Gavin, thank you and thank you. Gavin, I downloaded Publish or Perish as you suggested. It turned up three articles which used the key phrase that this author used for her theory, none of which has been cited in the last 17 years. This confirms my manual skimming of a basic Google Scholar search I performed, so it’s a great tool. In your experience, do you see many examples of PoP or Google Scholar inaccuracies, or is it fairly robust?

      Thanks for your insight and suggestions!

    6. Gavin Moodie

      Adjunct professor at RMIT University

      In reply to Christopher Pitt

      I haven't tested Google Scholar or Publish or Perish extensively. I have found both good, but the results need checking to ensure that publications ascribed to Christopher Pitt, Chris Pitt, C Pitt, etc, all refer to the same person.
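
A crude version of that check can be sketched in a few lines of Python - reducing each name to surname plus first initial before grouping. This is a heuristic only: it will wrongly merge distinct authors who share a surname and initial, so the results still need a human eye.

```python
def name_key(name):
    """Collapse an author-name variant to (surname, first initial)."""
    parts = name.split()
    return (parts[-1].lower(), parts[0][0].lower())

variants = ["Christopher Pitt", "Chris Pitt", "C Pitt"]
keys = {name_key(n) for n in variants}
print(len(keys))  # 1 - all three variants collapse to the same key
```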

      From my reading of the literature there is cautious interest in Google Scholar and results are sometimes reported in contrast with more established (and far more difficult and time consuming) bibliographic methods. Most Google Scholar results are higher than the better…

  3. ǝɔɹǝıԀ uɥoſ

    Speech Pathologist

    Great article and great decision by NHMRC - I think the JIF was the biggest barrier to using open access.
