Crumbling Impact Factor tablet with modern metrics rising around it.

Is the Impact Factor Impacting You? Why Scientists Are Ditching This Metric

"Discover how the scientific community is moving away from the traditional Impact Factor to embrace more meaningful measures of research influence."


For decades, the Impact Factor (IF) has reigned supreme as a metric for judging the “importance” of scientific journals. Academic institutions have historically relied on it for promotion and tenure decisions, while other organizations have used it to evaluate funding applications. Despite numerous editorials and commentaries highlighting its misuse, the IF persists, often frustrating journal editors because of its inherent inaccuracies and susceptibility to manipulation.

Understanding how the IF is calculated quickly reveals its flaws in assessing a journal's true impact. The IF is determined by dividing the number of citations a journal receives in a given year to content it published in the previous two years by the number of citable items it published in those two years. However, the numerator includes citations to all content types, while the denominator counts only primary research and review articles. This 'apples to oranges' comparison creates a skewed representation of a journal's actual influence.
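The arithmetic behind this mismatch can be sketched in a few lines. The numbers below are invented for illustration and do not describe any real journal:

```python
# Illustrative sketch of the Impact Factor arithmetic with invented
# numbers -- not data from any real journal.

def impact_factor(citations_to_all_content: int, citable_items: int) -> float:
    """Citations in year X to content published in years X-1 and X-2,
    divided by the 'citable items' (primary research and review
    articles) published in those two years."""
    return citations_to_all_content / citable_items

# The mismatch: the numerator may include citations to editorials,
# letters, and news pieces that never enter the denominator.
citations = 1200            # citations to ALL content types (hypothetical)
research_and_reviews = 400  # only these count as citable items
print(round(impact_factor(citations, research_and_reviews), 1))  # 3.0
```

Because non-citable content adds citations to the numerator without enlarging the denominator, a journal can raise its IF simply by publishing more of it.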

Furthermore, a single highly cited article can disproportionately inflate a journal's IF. There are also ways to manipulate the IF, such as reducing the number of primary research articles published, increasing the publication of review articles (which tend to be cited more frequently), or even negotiating article designations with the company that calculates the IF. These issues highlight the IF's fundamental flaws as a reliable metric.
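The single-article distortion is easy to see by comparing the mean (which an IF-style average reflects) with the median citation count. The citation counts below are hypothetical:

```python
# Hypothetical per-article citation counts for a small journal:
# one blockbuster paper dominates an otherwise modestly cited list.
from statistics import mean, median

cites = [0, 1, 1, 2, 2, 3, 350]

print(round(mean(cites), 1))  # 51.3 -- what an IF-style average reflects
print(median(cites))          # 2 -- what the typical article receives
```

One outlier lifts the average by more than an order of magnitude, which is why a journal's IF says little about the citation performance of a typical article it publishes.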

Why the Scientific Community Is Saying 'No' to the Impact Factor


So, how can we diminish the over-reliance on the IF? Casadevall and Fang proposed several strategies, including supporting the San Francisco Declaration on Research Assessment (DORA), boycotting high-IF journals, developing diverse metrics, and reforming review criteria for promotion, tenure, and research funding. The American Society for Microbiology, boasting over 30,000 members, recently removed the IF from more than a dozen of its journals – a move applauded by many.

The Editors-in-Chief of The Journal of Clinical Pharmacology (JCP) and Clinical Pharmacology in Drug Development (CPDD), journals of the American College of Clinical Pharmacology (ACCP), have also taken a stand by proposing the removal of the IF from their respective journal websites. This proposal received unanimous support from the ACCP Publications Committee and Board of Regents, signaling a significant shift in perspective.

  • Inaccurate Representation: The IF calculation method compares citations of all content types (numerator) to only primary research and review articles (denominator), creating a skewed representation.
  • Manipulation Potential: Journals can manipulate their IF by reducing the number of primary research articles, increasing review articles, or negotiating article designations.
  • Disproportionate Influence: A single, highly cited article can significantly inflate a journal's IF, regardless of the overall quality of other publications.
  • Focus on Quantity Over Quality: The IF emphasizes citation counts rather than the actual scientific merit, novelty, or impact of individual articles.
In the 21st century, more relevant metrics, such as full-article downloads, offer a more direct illustration of the impact of published articles. For instance, JCP's full-text downloads have surged by 300% between 2013 and 2016. Similarly, CPDD, indexed in 2016, saw its full-text downloads triple from 2015 to 2016. These numbers suggest a growing shift towards assessing impact based on actual readership and engagement.

Time to Rethink How We Measure Scientific Success?

It's time for readers and fellow editors to critically re-evaluate the use and importance (or lack thereof) of the IF. This move sends a clear message: the IF is outdated and misused, and we need to explore alternative metrics to assess the true importance and scientific impact of research publications.

About this Article

This article was crafted using a human-AI hybrid and collaborative approach. AI assisted our team with initial drafting, research insights, identifying key questions, and image generation. Our human editors guided topic selection, defined the angle, structured the content, ensured factual accuracy and relevance, refined the tone, and conducted thorough editing to deliver helpful, high-quality information. See our About page for more information.

Everything You Need To Know

1. What exactly is the Impact Factor (IF) and why has it been so widely used?

The Impact Factor (IF) is a metric that has historically been used to determine the "importance" of scientific journals. It has been relied upon by academic institutions for decisions about promotion and tenure, and by organizations for evaluating funding applications. Despite criticisms, the IF has persisted as a standard, although its accuracy and potential for manipulation have been questioned.

2. How is the Impact Factor (IF) calculated, and what are the inherent flaws in its calculation?

The Impact Factor (IF) is calculated by dividing the number of citations a journal receives in a given year to content it published in the previous two years by the number of citable items it published in those two years. One significant flaw is that the numerator includes citations to all content types, while the denominator counts only primary research and review articles. This 'apples to oranges' comparison can create a skewed representation of a journal's actual influence. Additionally, a single highly cited article can disproportionately inflate a journal's IF.

3. In what ways can the Impact Factor (IF) be manipulated by journals, and what does this imply about its reliability?

Journals can manipulate their Impact Factor (IF) by reducing the number of primary research articles published, increasing the publication of review articles (which tend to be cited more frequently), or even negotiating article designations with the company that calculates the IF. These potential manipulations highlight the IF's fundamental flaws as a reliable metric of a journal's true impact or the quality of the research it contains. These actions reduce the integrity of the IF as a measure of journal quality and influence.

4. What alternative strategies have been proposed to diminish the over-reliance on the Impact Factor (IF) in assessing research impact?

Several strategies have been proposed to move away from the over-reliance on the Impact Factor (IF). These include supporting the San Francisco Declaration on Research Assessment (DORA), boycotting high-IF journals, developing diverse metrics, and reforming review criteria for promotion, tenure, and research funding. Organizations like the American Society for Microbiology have already taken steps by removing the IF from their journals, signaling a broader shift in how research is evaluated.

5. Besides the Impact Factor (IF), what are some alternative metrics that can offer a more direct illustration of the impact of published articles, and why are they considered more relevant?

More relevant metrics, such as full-article downloads, offer a more direct illustration of the impact of published articles. For example, The Journal of Clinical Pharmacology's (JCP) full-text downloads have significantly increased, as have Clinical Pharmacology in Drug Development's (CPDD). These numbers suggest a growing shift towards assessing impact based on actual readership and engagement, providing a more nuanced understanding of a journal's influence than the Impact Factor (IF) alone. Metrics like download counts directly reflect how often the content is accessed and potentially used by other researchers.
