Is the Impact Factor Impacting You? Why Scientists Are Ditching This Metric
"Discover how the scientific community is moving away from the traditional Impact Factor to embrace more meaningful measures of research influence."
For decades, the Impact Factor (IF) has reigned supreme as a metric for judging the “importance” of scientific journals. Academic institutions have historically relied on it for promotion and tenure decisions, while other organizations have used it to evaluate funding applications. Despite numerous editorials and commentaries highlighting its misuse, the IF persists, often frustrating journal editors because of its inherent inaccuracies and susceptibility to manipulation.
Understanding how the IF is calculated quickly reveals its flaws in accurately assessing a journal's true impact. A journal's IF for a given year is the number of citations received that year to items the journal published in the previous two years, divided by the number of "citable items" it published in those two years. However, the numerator includes citations to all content types, while the denominator counts only primary research and review articles. This 'apples to oranges' comparison creates a skewed representation of a journal's actual influence.
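The numerator/denominator mismatch can be sketched with a toy calculation. The journal content and citation counts below are entirely hypothetical, and the model is deliberately simplified:

```python
# Toy sketch of the Impact Factor calculation, illustrating the
# numerator/denominator mismatch. All numbers are hypothetical.

# Content a journal published in the two preceding years, with the
# citations each item received in the current year.
items = [
    {"type": "research",  "citations": 10},
    {"type": "review",    "citations": 40},  # reviews tend to draw more citations
    {"type": "editorial", "citations": 5},   # cited, but not a "citable item"
    {"type": "news",      "citations": 3},   # cited, but not a "citable item"
]

# Numerator: citations to ALL content types, editorials and news included.
total_citations = sum(item["citations"] for item in items)

# Denominator: only "citable items" (primary research and review articles).
citable_items = [item for item in items if item["type"] in ("research", "review")]

impact_factor = total_citations / len(citable_items)
print(impact_factor)  # 58 citations / 2 citable items = 29.0
```

Because the editorial and news citations count toward the numerator but their articles are excluded from the denominator, the figure comes out higher than the citable articles alone would justify.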
Furthermore, a single highly cited article can disproportionately inflate a journal's IF. There are also ways to manipulate the IF, such as reducing the number of primary research articles published, increasing the publication of review articles (which tend to be cited more frequently), or even negotiating article designations with the company that calculates the IF. These issues highlight the IF's fundamental flaws as a reliable metric.
Why the Scientific Community Is Saying 'No' to the Impact Factor

So, how can we diminish the over-reliance on the IF? Casadevall and Fang proposed several strategies, including supporting the San Francisco Declaration on Research Assessment (DORA), boycotting high-IF journals, developing diverse metrics, and reforming review criteria for promotion, tenure, and research funding. The American Society for Microbiology, with more than 30,000 members, recently removed the IF from more than a dozen of its journals – a move applauded by many.
- Inaccurate Representation: The IF calculation method compares citations of all content types (numerator) to only primary research and review articles (denominator), creating a skewed representation.
- Manipulation Potential: Journals can manipulate their IF by reducing the number of primary research articles, increasing review articles, or negotiating article designations.
- Disproportionate Influence: A single, highly cited article can significantly inflate a journal's IF, regardless of the overall quality of other publications.
- Focus on Quantity Over Quality: The IF emphasizes citation counts rather than the actual scientific merit, novelty, or impact of individual articles.
Time to Rethink How We Measure Scientific Success?
It's time for readers and fellow editors to critically re-evaluate the use and importance (or lack thereof) of the IF. Moves like the American Society for Microbiology's send a clear message: the IF is outdated and misused, and we need alternative metrics that assess the true importance and scientific impact of research publications.