
Do not resuscitate: the journal impact factor declared dead

Science is a highly competitive business, so measuring the impact of scientific research, meaningfully and objectively, is essential. The journal impact factor (JIF) has emerged over the past few decades as the most widely used metric for assessing research quality.

As a research scientist, Medical Research Institute Director and former Editor-in-Chief of a scientific journal, I have to confess to my own all-too-common use of the JIF, and to my delight when that particular parameter fell in my favour.

But the truth is the JIF has major flaws.

There are better ways to gauge the impact of a piece of research, the quality of an individual researcher and even the quality of a peer-reviewed journal. The San Francisco Declaration on Research Assessment attempts to formally address the deficiencies in the JIF measurement and proposes the adoption of different practices in assessing the quality of research publications.


More on the declaration later - but first, some discussion of the JIF. What is it and what’s the problem with using it?

Working the JIF

Impact factors were first devised by American scientist Eugene Garfield in 1955. The current JIF system is a measure of how frequently recently published papers from a particular journal are cited (referenced in another body of work).

Hence, it is said to be a measure of the “impact” of the research published in that journal.

Technically, it is the average number of times that papers published in a journal over the previous two years were cited in a given year. For example, a journal with an impact factor of 10 in 2012 means the papers it published in 2010 and 2011 were cited an average of 10 times each in 2012.
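A back-of-the-envelope sketch of that calculation (in Python, with an invented journal and invented citation counts, purely for illustration) looks something like this:

    # Minimal sketch of the two-year impact factor described above.
    # The journal and its figures are invented for illustration.

    def impact_factor(citations_this_year, papers_prev_two_years):
        # Citations received in the census year to papers published in the
        # previous two years, divided by the number of those papers.
        return citations_this_year / papers_prev_two_years

    # Hypothetical journal: 150 papers published across 2010-2011,
    # cited 1,500 times in total during 2012.
    print(impact_factor(1500, 150))  # 10.0 -> a 2012 JIF of 10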

On the face of it, this should be a good measure of the scientific quality of a journal, but even in this regard the JIF has only limited value.

The JIF can be greatly skewed by an extraordinarily highly cited individual paper. It also takes no account of the different sizes of particular scientific disciplines, nor of the fact that review articles tend to be cited more often than primary research articles.

These and other deficiencies mean the JIF is not only a blunt metric for assessing the quality of a journal but is also open to manipulation. Journals may decide to publish on certain topics or favour certain article types (such as reviews) to maximise their JIF.


So what are the options?

While the JIF may have some limited value in assessing journal quality, for an individual researcher or an individual piece of research it is even less reliable.

In the context of peer-reviewed publications at least, what’s important is the frequency with which others cite that individual’s publication, preferably in a way that considers the variability of the size of different scientific fields.

One day it may even prove worthwhile to give "bonus marks" to individuals who publish highly cited papers in low-impact journals. Such metrics are being developed and are becoming more commonplace.


One example is the h-index, proposed by physicist Jorge E. Hirsch in 2005, which links the number of papers an author has published with the number of times those papers have been cited: an author has an h-index of h if h of their papers have each been cited at least h times. While emerging metrics have their own deficiencies (for example, your h-index generally improves as you get older!), they are part of an important trend.
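To make that definition concrete, here is a minimal sketch of the h-index calculation (in Python, with an invented citation record for a hypothetical author):

    # Minimal sketch of the h-index: the largest h such that the author
    # has h papers each cited at least h times. The counts are invented.

    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical author with six papers:
    print(h_index([25, 8, 5, 4, 3, 1]))  # 4 -> four papers cited at least 4 times each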

Looking beyond citations

The Declaration on Research Assessment is signed by an impressive list of influential individual scientists and organisations as well as the editors-in-chief of many major journals, including Science.

To some extent the declaration states what many in research in this country already know and have begun applying to their judgements of individual researchers and of the quality of scientific studies. However, it is probably the clear line in the sand that the scientific world needed in relation to the JIF.

It is the next development that will be the most interesting: a time when we look beyond simple publication metrics to judge more fully the impact of a piece of research. For example:

  • what resources (online or otherwise) were produced as a result of the work?
  • how many people accessed these resources?
  • what impact did the work have on policy and practice?
  • what was the economic, social or environmental benefit of the work?

Therefore, the significance of the San Francisco Declaration on Research Assessment needs to be seen not simply as announcing the death of the JIF, but also as a step along a pathway to a more enlightened method of assessing the impact of research.
