The journal impact factor, despite its limitations, remains one of the most influential metrics in academic publishing. This influence has created powerful incentives for journals to artificially inflate their scores through manipulation tactics. From coercive citation practices to sophisticated citation cartels, the methods journals use to game the system undermine the integrity of scholarly communication and harm researchers who unknowingly publish in manipulated journals.
This comprehensive guide examines the dark side of impact factor culture: the manipulation tactics journals employ, why they do it, how these practices are detected and punished, and most importantly, how researchers can protect themselves. Understanding these issues is essential for navigating the modern publishing landscape and maintaining research integrity.
Critical Warning
Publishing in journals with a history of impact factor manipulation can damage your reputation and career. Suppressed journals may lose indexing, making your work invisible in major databases. This guide helps you identify warning signs before you submit.
What Is Impact Factor Manipulation?
Impact factor manipulation refers to any deliberate editorial practice designed to artificially inflate a journal's citation metrics rather than naturally improving the quality and influence of published research. These practices exploit the mathematical structure of the impact factor calculation while violating the ethical principles of scholarly publishing.
The impact factor measures the average number of citations received by articles published in a journal during a specific time window. The standard two-year version is calculated by dividing the citations a journal receives in a given year to items it published in the previous two years by the number of citable items it published in those two years. This seemingly straightforward calculation creates multiple opportunities for manipulation: inflating the numerator (citations) or shrinking the denominator (citable items).
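The arithmetic can be made concrete with a minimal sketch. All numbers below are invented for illustration: a journal's 2023 impact factor is the citations it received in 2023 to items published in 2021-2022, divided by the citable items it published in those two years.

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: e.g., the 2023 IF divides 2023 citations
    to 2021-2022 items by the count of citable 2021-2022 items."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal that published 200 citable items in 2021-2022 and received
# 700 citations to them during 2023 has a 2023 impact factor of 3.5.
print(impact_factor(700, 200))  # 3.5
```

Because both inputs are under partial editorial control, a journal can move the ratio without publishing better research, which is exactly the opening the tactics below exploit.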
Why Manipulation Matters
Impact factor manipulation isn't merely a technical violation—it distorts the entire research ecosystem. Artificially inflated metrics mislead researchers about journal quality, divert submissions from legitimate journals, waste researchers' time and institutional funds, and ultimately undermine trust in the scholarly communication system. When journals manipulate their metrics, everyone loses except the journals themselves.
The Scale of the Problem
Clarivate Analytics suppresses dozens of journals annually for impact factor manipulation. Studies estimate that 5-10% of journals engage in some form of metric gaming, with the problem particularly acute in certain fields and regions. The actual prevalence may be higher, as detection methods catch only the most egregious cases.
Common Manipulation Tactics
Journals employ various tactics to inflate their impact factors. Understanding these methods helps researchers identify potentially problematic journals and understand why certain editorial requests may be unethical.
1. Coercive Citation
Coercive citation occurs when editors require authors to add citations to the journal as a condition of publication, regardless of relevance. This practice takes several forms, from subtle suggestions during peer review to explicit requirements that authors cite 5-10 articles from the journal before acceptance.
Example scenario: An editor emails: "Your manuscript is provisionally accepted, but please add 3-5 citations to recent articles from our journal to better position your work within our journal's scope." The suggested articles have minimal relevance to your research.
Studies suggest that 5-20% of researchers have experienced coercive citation, with higher rates in certain fields. The practice is particularly insidious because it exploits the power imbalance between editors and authors—researchers desperate to publish may comply rather than risk rejection.
2. Citation Stacking
Citation stacking involves agreements between multiple journals to cite each other's articles, creating a reciprocal citation arrangement that inflates all participating journals' impact factors. Unlike legitimate citations that occur because research is genuinely relevant, stacked citations serve purely to manipulate metrics.
Citation stacking can be detected through network analysis that reveals unusual citation patterns: journals that heavily cite each other despite limited topical overlap, sudden spikes in cross-citations between specific journals, or reciprocal citation rates far exceeding field norms. Clarivate actively monitors for these patterns.
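Clarivate's actual detection pipeline is proprietary, but the core network-analysis idea can be sketched in a few lines: flag journal pairs whose two-way citation volume is an extreme outlier relative to other pairs. The journal names, counts, and the threshold below are all invented for illustration.

```python
import statistics

# Toy cross-citation counts: (citing journal, cited journal) -> citations.
cross_citations = {
    ("J_A", "J_B"): 120, ("J_B", "J_A"): 110,   # suspicious reciprocal pair
    ("J_A", "J_C"): 4,   ("J_C", "J_A"): 6,
    ("J_B", "J_C"): 3,   ("J_C", "J_B"): 5,
}

def flag_reciprocal_pairs(matrix, threshold=10.0):
    """Return unordered journal pairs whose combined two-way citation
    total exceeds `threshold` times the median pair total (a crude
    stand-in for a proper statistical outlier test)."""
    pair_totals = {}
    for (src, dst), n in matrix.items():
        key = tuple(sorted((src, dst)))
        pair_totals[key] = pair_totals.get(key, 0) + n
    median = statistics.median(pair_totals.values())
    return [pair for pair, total in pair_totals.items()
            if total > threshold * median]

print(flag_reciprocal_pairs(cross_citations))  # [('J_A', 'J_B')]
```

Real screens work on thousands of journals and also condition on topical overlap, but the principle is the same: reciprocal flows far above what the rest of the network exhibits are what give stacking away.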
3. Excessive Self-Citation
While some self-citation is natural and appropriate—journals covering specialized topics will naturally reference their own previous publications—excessive self-citation crosses into manipulation. Journals that derive more than 30-40% of their citations from self-citation raise red flags.
Manipulative self-citation strategies include publishing review articles that extensively cite the journal's own papers, encouraging authors to cite previous journal articles regardless of relevance, and publishing editorial content designed solely to generate self-citations. Clarivate tracks self-citation rates and can suppress journals with suspicious patterns.
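The leverage self-citation provides is easy to quantify. In the invented example below, removing self-citations from the numerator (JCR publishes a comparable figure with journal self-citations excluded) shows how much of the headline metric is self-generated:

```python
# Invented numbers: "external" citations come from other journals.
external_cites, self_cites, citable_items = 240, 160, 100

if_with_self = (external_cites + self_cites) / citable_items  # 4.0
if_without_self = external_cites / citable_items              # 2.4
self_rate = self_cites / (external_cites + self_cites)        # 40%

print(if_with_self, if_without_self, f"{self_rate:.0%}")
```

A 40% self-citation rate here accounts for the difference between an impact factor of 4.0 and one of 2.4, which is why comparing the two published figures is a quick integrity check.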
4. Strategic Article Classification
Impact factor calculation only includes certain article types in the denominator (original research and reviews) but counts citations to all content in the numerator. Journals exploit this by publishing highly-citable content as "editorials" or "letters" that generate citations without being counted as citable items.
For example, a journal might publish what is effectively a major review article as an "editorial perspective" so it generates citations (numerator) but isn't counted in the article count (denominator). This inflates the impact factor ratio. Clarivate has increasingly sophisticated methods to detect misclassified articles.
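The numerator/denominator asymmetry makes the payoff from reclassification purely arithmetic. In this invented example, relabeling 10 reviews as editorials leaves the citation total untouched while shrinking the count of citable items:

```python
# Citations to ALL content count in the numerator, but only "citable
# items" (research articles and reviews) count in the denominator.
citations = {"research": 300, "reviews": 150, "editorials": 50}  # invented
total_citations = sum(citations.values())  # 500, counted regardless of type

honest_if = total_citations / 100        # 100 citable items
reclassified_if = total_citations / 90   # 10 reviews relabeled as editorials

print(honest_if, round(reclassified_if, 2))  # 5.0 5.56
```

A roughly 11% boost from a paperwork change, with no new research published, is why article classification is audited so closely.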
5. Citation Cartels and Rings
Citation cartels represent the most sophisticated manipulation schemes. Multiple journals, sometimes spanning different publishers and countries, coordinate to systematically cite each other's content. These arrangements can involve explicit agreements or implicit understandings among editors.
A notable example occurred in 2012 when four Brazilian journals were suppressed for participating in a citation cartel where they collectively agreed to cite each other's articles. Network analysis revealed that these journals cited each other far more than their scientific overlap would justify. Such cartels can involve 5-10 or more journals creating complex citation webs designed to evade simple detection.
6. Review Article Proliferation
Review articles typically receive more citations than original research because they synthesize existing knowledge and are frequently cited as background references. Journals seeking to boost their impact factors may publish disproportionate numbers of review articles, sometimes comprising 40-60% of content compared to field norms of 10-20%.
While publishing quality reviews is legitimate, the manipulation occurs when journals accept mediocre reviews solely for their citation potential, solicit reviews from authors willing to cite the journal extensively, or publish "reviews" that are thinly-veiled literature compilations rather than analytical syntheses.
7. Strategic Publication Timing
Since impact factor uses a specific time window (typically 2 years), journals can game the system by manipulating publication timing. Tactics include holding back accepted articles to publish them at strategically advantageous times, rushing highly-citable content into print while delaying less-citable articles, or manipulating online publication dates versus print publication dates.
For example, a journal might delay publishing an entire issue until January rather than December so that those articles have an extra year to accumulate citations before being included in impact factor calculations. While subtle, these timing manipulations can meaningfully affect metrics.
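The window arithmetic behind this tactic can be made explicit. Under the standard two-year calculation, an article's citations count toward the two impact factors following its publication year, so shifting publication from December to January moves the article into later windows and gives it nearly a full extra year to accumulate citations first:

```python
def if_window_years(pub_year: int) -> tuple:
    """Years in whose two-year impact factor an article's citations
    are counted, under the standard calculation."""
    return (pub_year + 1, pub_year + 2)

print(if_window_years(2023))  # published Dec 2023 -> counts in 2024, 2025
print(if_window_years(2024))  # delayed to Jan 2024 -> counts in 2025, 2026
```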
Why Journals Manipulate Their Impact Factor
Understanding the motivations behind manipulation helps explain why this problem persists despite enforcement efforts. Journals face powerful incentives to inflate their metrics.
Attracting Submissions
Higher impact factors attract more manuscript submissions, giving editors greater selectivity and improving perceived prestige. Researchers often filter journal choices by impact factor, so even modest increases can significantly boost submission rates.
Financial Incentives
For subscription journals, higher prestige means higher subscription prices and renewal rates. For open-access journals, prestigious metrics justify higher article processing charges (APCs). Publishers have direct financial motivation to inflate their journals' impact factors.
Institutional Pressure
In some countries and institutions, journal editors face explicit pressure to improve their journal's metrics. Their performance reviews, funding, or even employment may depend on achieving impact factor targets, creating strong incentives for manipulation.
Competitive Dynamics
When some journals manipulate successfully, others face pressure to follow suit or lose competitive position. This creates a "race to the bottom" where ethical journals are disadvantaged relative to manipulating competitors.
The fundamental problem is that impact factors have become disconnected from their intended purpose as rough quality indicators and instead function as high-stakes performance metrics. This transformation illustrates what measurement scholars call Goodhart's Law: when a measure becomes a target, it ceases to be a good measure.
How Clarivate Detects and Responds to Manipulation
Clarivate Analytics, which publishes the Journal Citation Reports, has developed increasingly sophisticated methods to detect manipulation and enforce consequences. Understanding these processes helps explain why some manipulating journals get caught while others evade detection.
Detection Methods
Clarivate employs multiple detection strategies working in combination. Statistical anomaly detection identifies journals with unusual citation patterns—sudden spikes, excessive self-citation, or citation rates diverging from field norms. Network analysis reveals citation cartels by mapping unusual citation relationships between journals.
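One simple anomaly screen of this kind can be sketched directly: express a journal's latest impact factor as a z-score against its own recent history, and flag values far outside normal year-to-year variation. The yearly figures below are invented, and real screens use more robust statistics across many journals.

```python
import statistics

def jump_zscore(history, latest):
    """How many standard deviations the latest impact factor sits
    above the journal's own recent history."""
    return (latest - statistics.mean(history)) / statistics.stdev(history)

stable_years = [2.1, 2.3, 2.2, 2.4]   # four stable, invented years
z = jump_zscore(stable_years, 6.8)    # a near-tripling in one year
print(z > 3)  # True: far outside normal variation, so flag for review
```

A flag from a screen like this would not itself prove manipulation; it simply routes the journal to the manual review described next.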
Manual editorial review investigates flagged journals, examining individual articles and citation contexts. Clarivate also receives reports from researchers, reviewers, and editors who witness coercive citation or other manipulation. These reports trigger investigations that can lead to suppression.
Machine Learning Detection
Recent advances incorporate machine learning algorithms that can identify subtle manipulation patterns invisible to traditional statistical methods. These systems analyze citation timing, author-editor relationships, citation context, and numerous other variables to flag suspicious behavior.
Enforcement Actions
When manipulation is confirmed, Clarivate takes several escalating actions. The most common is suppression: the journal's impact factor is not published for one or more years, though the journal remains indexed in Web of Science. This is a serious penalty that signals quality concerns to the research community.
More severe cases result in de-indexing, where the journal is completely removed from Web of Science. This is essentially an academic death sentence—the journal loses prestige, visibility, and often a substantial portion of its author base. Recovery from de-indexing is extremely difficult.
Clarivate publicly announces suppressions and de-indexing decisions, though often without detailed explanations of the specific manipulation detected. This transparency helps warn researchers while potentially protecting journals from detailed reverse-engineering of detection methods.
Limitations of Enforcement
Despite these efforts, enforcement has limitations. Detection systems catch primarily egregious or unsophisticated manipulation. Subtle gaming, particularly by well-resourced journals that understand detection methods, may continue undetected. Additionally, Clarivate's commercial interests in maintaining journal coverage can create conflicts with aggressive enforcement.
The arms race between manipulators and detectors continues evolving. As detection methods improve, manipulation tactics become more sophisticated, requiring ever-more advanced detection in turn.
Notable Cases: Journals Suppressed or Banned
Examining specific suppression cases illustrates the variety of manipulation tactics and their consequences. These examples serve as cautionary tales for journals tempted to game the system and warning signs for researchers.
The 2020 Mass Suppression
In 2020, Clarivate suppressed over 50 journals in a single announcement, the largest single enforcement action in impact factor history. The suppressions spanned multiple publishers and countries, indicating widespread citation manipulation across the ecosystem. Many suppressed journals were from emerging economies where impact factor pressure is particularly intense.
The suppressed journals included titles from major publishers, demonstrating that manipulation isn't limited to predatory or low-tier journals. Several had respectable impact factors before suppression, showing that superficial metrics don't guarantee integrity.
The Brazilian Citation Cartel
In 2012, four Brazilian medical journals were suppressed after investigators discovered coordinated citation stacking. The journals had agreements to cite each other's articles, with citation rates between the journals far exceeding scientific justification. The cartel collapsed when network analysis revealed the suspicious citation pattern.
This case was notable because it involved respected national journals, not predatory publishers. It demonstrated that institutional pressure to improve national journal rankings could drive even established journals to manipulation.
Coercive Citation Cases
Several journals have faced suppression specifically for coercive citation practices. These cases typically come to light through researcher complaints rather than statistical detection. When multiple authors report receiving identical coercive citation requests, investigations ensue.
A pattern in these cases is that journals often defend the practice as "helping authors understand the journal's scope" or "ensuring proper contextualization," euphemisms that don't hide the manipulation intent. Researchers who receive such requests should document them and consider reporting to Clarivate.
Impact on Authors
When a journal is suppressed or de-indexed, articles published there may lose visibility and prestige. Authors can't retroactively withdraw publications, making journal selection critical. Some tenure and promotion committees discount publications in suppressed journals, potentially harming careers.
The Role of Review Articles in Boosting Impact Factor
Review articles deserve special attention in manipulation discussions because they occupy a gray area between legitimate editorial strategy and problematic gaming. Understanding this nuance helps researchers distinguish between journals publishing quality reviews and those exploiting reviews for metric manipulation.
Why Reviews Generate Citations
Review articles synthesize existing research, making them valuable references for literature review sections, introductory background, and establishing context. A well-crafted review in an active field might receive 50-200 citations, compared to 5-20 for typical original research articles. This citation multiplier makes reviews attractive to impact factor-conscious journals.
Legitimate vs. Manipulative Review Publication
Publishing quality reviews serves the research community by synthesizing fragmented literature and identifying research gaps. Journals focused on reviews (like Annual Reviews, Nature Reviews series) provide valuable services. The practice crosses the line into manipulation when journals compromise review quality for citation potential.
Warning signs include soliciting reviews primarily from authors with large publication portfolios who will cite extensively, accepting mediocre reviews that would be rejected by rigorous review journals, publishing reviews on topics tangential to the journal's scope solely because they'll be highly cited, or dramatically increasing review article proportion relative to field norms.
The "Review Bomb" Strategy
Some journals employ what researchers call "review bombing"—publishing multiple lengthy reviews in a short period specifically to boost citations. These reviews often have 100+ references and synthesize entire subfields. While individually legitimate, the collective strategy aims primarily at metric gaming rather than serving scholarly needs.
Identifying this tactic requires examining trends over time. A journal that suddenly increases review article output by 200-300% may be review bombing. Similarly, reviews that heavily cite the journal itself or citation cartel partners suggest manipulation rather than comprehensive synthesis.
Impact on Researchers and the Academic Field
Impact factor manipulation creates far-reaching consequences extending beyond the journals themselves. Understanding these impacts helps explain why manipulation is an urgent problem requiring collective action.
Career Consequences
Researchers who unknowingly publish in manipulating journals face career risks. If the journal is subsequently suppressed or de-indexed, publications there may be discounted in tenure evaluations, grant applications, or job searches. Early-career researchers are particularly vulnerable, as they may lack experience identifying problematic journals.
Some institutions explicitly devalue publications in suppressed journals, treating them similarly to predatory journal publications. While unfair to authors who published before problems were publicly known, this practice reflects the reputational damage associated with compromised journals.
Distorted Literature and Citation Patterns
Manipulation distorts the scholarly literature itself. Coercive citation creates artificial citation patterns where influential papers go uncited while marginally relevant articles receive citations. This noise in citation data degrades the usefulness of bibliometric analysis and literature mapping.
Citation cartels create echo chambers where clusters of journals cite each other while ignoring external research. Researchers using bibliometric tools to identify relevant literature may be misled by these artificial patterns, potentially missing important work while finding irrelevant articles.
Resource Misallocation
When artificially inflated impact factors mislead researchers about journal quality, resources flow inefficiently. Researchers submit to journals based on false prestige signals, wasting time in inappropriate submission processes. Libraries subscribe to journals based partly on impact factors, potentially allocating budgets to manipulating journals while cutting legitimate ones.
Erosion of Trust
Perhaps most damaging, widespread manipulation erodes trust in scholarly publishing infrastructure. If researchers can't trust impact factors to roughly indicate journal quality, the entire metrics-based evaluation system becomes suspect. This trust erosion affects not just impact factors but other bibliometric measures and even broader scholarly communication systems.
Public trust in science also suffers. When manipulation cases receive media attention, they feed narratives about academic corruption and compromise the perceived integrity of research institutions.
How to Identify Potentially Manipulated Journals
While definitive identification requires investigative resources beyond individual researchers' capacity, several warning signs can alert you to potential manipulation. Due diligence before submitting can protect your career and research reputation.
Quantitative Warning Signs
- Excessive self-citation: If the journal derives more than 35-40% of its citations from itself, investigate further. Check the Journal Citation Reports for self-citation rates.
- Dramatic impact factor increases: A journal whose impact factor doubles or triples in a single year may be manipulating rather than genuinely improving. Sustainable growth is typically gradual.
- High IF relative to peers: If a journal's impact factor significantly exceeds similar journals in the same field, question whether the difference reflects quality or manipulation.
- Unusual article type distribution: Journals publishing 40%+ review articles when field norms are 10-20% may be manipulating through reviews.
- Inconsistent metrics: Compare the impact factor with other metrics like CiteScore, SJR, or the h-index. Large discrepancies suggest one metric may be manipulated.
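These quantitative signs lend themselves to a simple pre-submission screen. The sketch below encodes the rough thresholds above; the journal data, field names, and percentile-gap cutoff are all invented, and real figures would come from JCR and Scopus.

```python
def warning_flags(j: dict) -> list:
    """Return a list of quantitative warning signs for a journal record.
    Thresholds mirror the rough figures discussed above; treat any
    flag as a prompt for closer investigation, not a verdict."""
    flags = []
    if j["self_citation_rate"] > 0.35:
        flags.append("excessive self-citation")
    if j["if_this_year"] >= 2 * j["if_last_year"]:
        flags.append("impact factor doubled in one year")
    if j["review_share"] > 0.40:
        flags.append("review-heavy output vs. field norms")
    if abs(j["if_percentile"] - j["citescore_percentile"]) > 30:
        flags.append("large IF/CiteScore discrepancy")
    return flags

journal = {                       # invented example record
    "self_citation_rate": 0.42,
    "if_last_year": 1.8, "if_this_year": 5.6,
    "review_share": 0.15,
    "if_percentile": 92, "citescore_percentile": 48,
}
print(warning_flags(journal))     # three of the four flags fire
```

Any flagged journal deserves the qualitative checks below before you submit; any clean result still isn't a guarantee, since subtle manipulation can pass simple thresholds.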
Qualitative Warning Signs
- Citation requests during review: Any suggestion or requirement to add citations to the journal should raise red flags, especially if the recommended articles aren't clearly relevant.
- History of suppression: Check whether the journal has been previously suppressed by Clarivate. Past manipulation suggests ongoing problems.
- Weak editorial processes: Suspiciously fast acceptance, minimal peer review feedback, or editors who aren't recognized experts in the field may indicate journals prioritizing publication volume over quality.
- Reputation among peers: Ask experienced colleagues about journals you're considering. Field-specific knowledge often identifies problematic journals before formal suppression.
Due Diligence Checklist
- ✓ Check the journal's self-citation rate in JCR
- ✓ Review impact factor trends over 5+ years
- ✓ Compare IF with alternative metrics (CiteScore, SJR)
- ✓ Search for suppression history or investigations
- ✓ Examine recent issues for article type distribution
- ✓ Read editorial policies about citations and references
- ✓ Consult colleagues with field expertise
- ✓ Check publisher reputation and ethics history
Reform Efforts and Alternative Metrics
Recognition of impact factor manipulation has catalyzed various reform efforts, from improved enforcement to development of alternative metrics less susceptible to gaming. Understanding these developments helps contextualize ongoing debates about research evaluation.
Enhanced Enforcement and Transparency
Clarivate has significantly strengthened enforcement in recent years. Annual suppressions have increased, detection methods have improved, and public reporting has become more detailed. These improvements demonstrate recognition that manipulation threatens the credibility of the entire journal citation system.
Additionally, Clarivate now publishes more detailed journal metadata, including self-citation rates and citation network information. This transparency enables researchers to conduct their own evaluations rather than relying solely on impact factor rankings.
Alternative Bibliometric Indicators
Recognition of impact factor limitations has driven development of alternative metrics. CiteScore, calculated by Scopus using a longer time window and different methodologies, provides an alternative perspective less susceptible to some manipulation tactics. The SCImago Journal Rank (SJR) weights citations by source prestige, making citation cartels among low-prestige journals less effective.
The h-index and related metrics evaluate journals based on citation distribution rather than averages, reducing the impact of a few highly-cited outliers. Altmetrics track social media mentions, downloads, and other impact indicators beyond citations, capturing different dimensions of influence.
No single alternative metric solves all problems, but using multiple metrics in combination provides a more complete picture and makes manipulation more difficult. Journals that appear strong across multiple independent metrics are more likely to be genuinely influential.
Institutional Policy Changes
Many institutions have reformed evaluation policies to reduce impact factor emphasis. The San Francisco Declaration on Research Assessment (DORA), signed by thousands of institutions worldwide, commits signatories to eliminating impact factor use in hiring, tenure, and promotion decisions and focusing instead on the scientific content of publications.
Implementation has been uneven—many DORA signatories continue using impact factors informally—but the declaration represents growing recognition that journal-level metrics shouldn't determine individual researcher evaluation.
Open Peer Review and Citation Analysis
Some reform advocates promote open peer review, where reviewer reports are published alongside articles. This transparency could expose coercive citation requests and other manipulation tactics. Similarly, making citation contexts available—showing the text surrounding citations—would enable analysis of whether citations are meaningful or perfunctory.
These reforms face resistance from traditional publishers and researchers who value anonymity in peer review, but pilot programs at various journals are testing their feasibility and effectiveness.
What Researchers Should Do
Individual researchers can take concrete actions to protect themselves from manipulation-related harms and contribute to broader reform efforts. These recommendations balance pragmatic career management with ethical scholarly citizenship.
Before Submitting Manuscripts
Conduct due diligence on target journals using the warning signs discussed earlier. Don't rely solely on impact factor—consult multiple metrics, examine recent issues, and seek advice from experienced colleagues. If a journal's metrics seem too good to be true, they probably are.
Check whether the journal has been previously suppressed or investigated. Simple web searches combining the journal name with terms like "suppression," "manipulation," or "retraction" can reveal problems not apparent from official metrics.
During Peer Review
If editors or reviewers request you add citations to the journal, carefully evaluate whether the suggestions are scientifically appropriate. Legitimate suggestions should directly relate to your research question and methods. If requests seem designed to inflate citations, document them and consider whether to comply.
You have several options when facing coercive citation: comply minimally if citations have some justification, politely decline and explain why citations aren't relevant, withdraw the manuscript and submit elsewhere, or report the practice to Clarivate or relevant professional societies.
Reporting Manipulation
Clarivate accepts reports of suspected manipulation. If you experience coercive citation or observe other problematic practices, consider reporting them. While individual reports may not trigger immediate action, patterns across multiple reports initiate investigations.
Professional societies and publishers often have ethics reporting mechanisms. Anonymous reporting is typically possible if you fear retaliation. Your reports contribute to broader enforcement efforts protecting other researchers.
Supporting Reform
Advocate within your institution for evaluation policies that don't over-emphasize impact factors. Support DORA principles and similar initiatives promoting responsible metrics use. When serving on evaluation committees, push for holistic assessment that examines research quality directly rather than using journal metrics as proxies.
Mentor early-career researchers about these issues. Help them understand manipulation risks and develop sophisticated journal selection skills. Your guidance can prevent career-damaging mistakes and promote ethical publishing practices.
Practicing Ethical Citation
Model ethical citation practices in your own work. Cite comprehensively based on scholarly relevance rather than journal prestige. Resist pressure to inflate reference lists with marginally relevant citations. When reviewing manuscripts, suggest citations based solely on scientific merit.
These individual actions, multiplied across the research community, create cultural norms that make manipulation more difficult and easier to detect.
Conclusion: Navigating a Manipulated Landscape
Impact factor manipulation represents a systemic problem in scholarly publishing, driven by powerful incentives and enabled by the disconnect between metrics and quality. While enforcement has improved, manipulation persists because the fundamental pressures—journal competition, career advancement tied to prestige signals, institutional ranking systems—remain unchanged.
For researchers navigating this landscape, awareness is the first defense. Understanding manipulation tactics, recognizing warning signs, and conducting due diligence before submitting manuscripts can prevent career-damaging mistakes. When you encounter coercive citation or other unethical practices, reporting them contributes to broader enforcement efforts that protect the entire research community.
Longer-term solutions require cultural change in how research is evaluated. Moving beyond simplistic journal metrics to holistic assessment of research quality, rigorously evaluating the scientific content rather than publication venue, and using multiple diverse indicators rather than single numbers would reduce manipulation incentives. These changes are happening—slowly—but require sustained advocacy and institutional commitment.
The impact factor, despite its flaws and susceptibility to manipulation, won't disappear soon. But by understanding its limitations, recognizing gaming tactics, and demanding accountability from journals and metrics providers, researchers can make informed decisions that protect their work while contributing to a healthier scholarly communication ecosystem.
Key Takeaways
- Impact factor manipulation is widespread, using tactics from coercive citation to citation cartels
- Journals manipulate primarily to attract submissions and justify higher subscription/APC prices
- Clarivate suppresses dozens of journals annually, but detection is imperfect
- Publishing in manipulated journals can damage your career if they're later suppressed
- Warning signs include excessive self-citation, dramatic metric changes, and citation requests during review
- Use multiple metrics, not just impact factor, to evaluate journals
- Report manipulation when you encounter it to support enforcement efforts
- Support institutional reforms reducing reliance on journal metrics for evaluation