Few things are changing faster in the research world than publishing. The “open access” movement recognises that publicly-funded research should be freely available to everyone. Now more than a decade old, open access is changing where researchers publish and, more importantly, how the wider world accesses – and assesses – their work.
As the nation’s medical research funding body, we at the National Health and Medical Research Council (NHMRC) mandate that all publications from research we’ve funded be openly accessible. We and the government’s other key funding organisation, the Australian Research Council, are flexible on how it’s done, as long as the paper is made available.
Researchers may opt for “green” self-archiving, where they publish in a restricted journal and archive a freely available version, or “gold” access, where readers can freely obtain articles via publisher websites.
Most Australian medical research publications will be available through university repositories and by researchers submitting to journals with copyright agreements that support the NHMRC open access policy. The university librarians have been especially helpful in ensuring that the institutional repositories are ready for this revision to the policy.
Consumer groups want direct access, as soon as possible, to the findings of research – after all, they pay for it through taxes and donations to charities. This information helps in a time when we’re bombarded with health messages of sometimes dubious origin and where vested interests are often not disclosed.
In 21st century medical research, consumers and patient group members are often integrally involved in the research itself and are important messengers between researchers and the community.
Death of the journal impact factor
The open access movement is having a significant impact too on how we measure the impact of scientific research.
For too long, the reputation of a journal has dominated publication choice – and that reputation has been mainly determined by the journal impact factor. This metric reflects how frequently a journal’s recent papers, taken as a whole, are cited in other journals.
For many years, the journal impact factor dominated how universities, research institutes and research funding bodies judged individual researchers. This has always been a mistake – the importance of any individual paper cannot be assessed on the citation performance of all the other papers in that journal. Even in the highest impact factor journals, some papers are never cited by other researchers.
The NHMRC moved away from using journal impact factors in 2008. So it was good to see the San Francisco Declaration on Research Assessment, which has now been signed by thousands of individual researchers and organisations, come out with such a strong statement earlier this year:
Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion or funding decisions.
Measuring real impact
In health and medical research, choosing where best to publish a paper involves so much more than just the prestige (or impact factor) of the journal. In clinical and public health sciences, authors will want the right audience to read the article.
When published in a surgical journal, for instance, the findings of surgical research will influence the thinking of more people, and more of those who should read the research, than when published in a more general journal, even if the latter had a higher impact factor. Similarly, public health researchers want to publish where public health policymakers and officials will read the article.
A single paper in the wide-reaching Medical Journal of Australia – which could change health policy and practice affecting thousands of Australians – may be of greater impact than a paper in a high-impact journal that very few people read.
All this has implications for peer review and the judgement of “track record”. Judging the achievements of researchers must amount to much more than simply counting the number of publications and noting the journals’ impact factors.
I agree with Science editor-in-chief Bruce Alberts, who recently argued that formulaic evaluation systems in which the “mere number of a researcher’s publications increases his or her score” create a strong disincentive to pursue risky and potentially ground-breaking research, because it may be years before the first research papers begin to emerge.
The NHMRC has long asked applicants to identify their best five papers and in 2014, we will be asking applicants to identify them in their track record attachment. This will make it easier for reviewers to look at these and evaluate them.
In the words of Bruce Alberts, analysing the contributions of researchers requires
the actual reading of a small selected set of each researcher’s publications, a task that must not be passed by default to journal editors.
There is one other potential implication of focusing on quantity rather than quality. It is often alleged (though the evidence is scant) that the pressure to produce many papers and to publish in journals with high impact factors explains some of the apparent increase in publication fraud in medical research.
This may or may not be true, but focusing more on the quality of a few papers, rather than just counting the total number of publications and being overly influenced by the reputation of the journal, can help ameliorate the publish-more-and-more syndrome.
Nothing stays the same in science and research. Publishing is set to change further. The democratisation of publishing began with the internet and has a long way yet to run. The challenge for researchers, institutions and funders will be to identify, protect and encourage quality and integrity.
Read more Conversation articles on open access here.