Most authors think of peer review as something that happens to their manuscript after they click submit. The actual mechanics of it, including the specific tools a reviewer might use to read, annotate, or assess the paper, are not something authors have historically had much reason to consider. That is changing. As of 2026, a substantial portion of the peer review literature now concerns how AI tools are being used inside the review process itself, not just in the manuscripts being reviewed. Understanding that distinction, and knowing where major medical journals stand on it, has become a practical part of preparing a submission.
A study published in late 2024 in a JAMA Network journal examined the policies of the top 100 medical journals on AI use in peer review and found that 78 of them had issued explicit guidance. Of those 78, 46 journals prohibited AI use outright, while 32 allowed it under specific conditions. The numbers have continued to shift: a cross-disciplinary analysis published in Learned Publishing in 2026 found that between March and August 2025 alone, 24.5 percent of high-impact-factor journals revised their AI peer review policies, with the proportion of journals holding some explicit position rising from 77 to 83 percent. That is a fast-moving policy environment. Authors who have not read a journal's reviewer AI policy are working from an incomplete picture of what happens to their submitted work.
The core question for authors
This is not primarily about whether AI peer review is good or bad. It is about understanding the system your manuscript enters: what protections apply to it, what standards govern the reviewers assessing it, and what signal your own manuscript preparation sends when AI tools are part of the review workflow.
What the ICMJE Now Requires of Reviewers
The International Committee of Medical Journal Editors updated its Recommendations in January 2026 with a new dedicated section, Section V, on the use of artificial intelligence in publishing. The section is divided into guidance for authors, guidance for reviewers, and guidance for editors. The reviewer subsection, Section V.B, sets out the formal standard that ICMJE member journals are expected to apply.
Under Section V.B, reviewers who wish to use AI during the review process must first request permission from the journal. They must maintain the confidentiality of the manuscript, which may preclude uploading it to external AI software where confidentiality cannot be guaranteed. If they use AI, they must disclose that use transparently in the review. And they must ensure the appropriateness and validity of the AI output, because, as the ICMJE explicitly notes, AI can generate authoritative-sounding text that is incorrect, incomplete, or biased. The instruction to journals is that reviewer guidance documents should address AI use directly, rather than leaving reviewers to infer the policy from general statements.
These requirements place meaningful obligations on the reviewer side. But they also create something authors can reasonably expect: that journals adhering to ICMJE recommendations will have a policy, will communicate it to reviewers, and will require disclosure of any AI involvement in the assessment. Authors submitting to ICMJE member journals (which include the New England Journal of Medicine, The Lancet, JAMA, BMJ, and others) can therefore expect some minimum standard to apply to how their manuscripts are handled.
Where Journals Draw the Line: Prohibition vs. Conditional Permission
The split between outright prohibition and conditional permission reflects a genuine difference in editorial philosophy. Among the 46 top medical journals that prohibit AI in peer review, confidentiality is the dominant concern: 96 percent of them cite it as the reason for the ban. The worry is specific and practical. When a reviewer uploads a manuscript to a public or commercial AI system, the manuscript content may be retained, used to train future models, or exposed to parties outside the review agreement. An unpublished clinical trial result, a novel mechanism, a case series that has not yet been disclosed: all of it goes into the system without the authors' knowledge or consent. The prohibition is not a statement about AI capability. It is a statement about data control.
Among the 32 journals that allow limited AI use, the conditions typically require the reviewer to confirm that they have not uploaded the manuscript to any public system, to disclose AI involvement in the review, and to take personal responsibility for the accuracy and fairness of the assessment. Some of these journals distinguish between using AI to help structure feedback or check language (which may be acceptable) and using AI to generate the substantive scientific judgment (which generally is not). The precise boundary varies, which is why authors cannot rely on a single general rule.
A practical point worth making: the journals in the prohibition camp tend to be among the highest-profile medical venues. JAMA explicitly prohibits peer reviewers from using AI, treating the breach of confidentiality it entails as disqualifying. When you submit to journals of that standing, the formal policy protects you, but it does not guarantee compliance. Journals can enforce what reviewers report; they cannot surveil what reviewers do privately before writing their review.
Publisher-Level Patterns
The 2026 Learned Publishing analysis found consistent differences by publisher. Elsevier and Cell Press tilted toward prohibition, while Wiley and Springer Nature were more likely to allow limited AI use under stated conditions. Mixed publishers, meaning those whose journals sit across different publishing models, had the highest rate of blanket prohibition. These are tendencies, not universal rules. Individual journals within each publisher group can and do diverge from the house position.
What this pattern reflects is partly risk posture and partly the practical difficulty of defining what "limited use" means when reviewer compliance cannot be verified externally. Elsevier's choice to prohibit is in some ways the easier policy to communicate, even if it may be imperfect in practice. A conditional permission policy requires more precise language, more robust reviewer onboarding, and more detailed disclosure mechanisms. Journals that have invested in that infrastructure are more likely to take the conditional permission route. Journals that have not may default to prohibition as the clearer boundary.
What the publisher-level picture means when choosing a journal
- Elsevier and Cell Press journals are more likely to prohibit reviewer AI use; expect stricter confidentiality protections for your manuscript.
- Wiley and Springer Nature journals are more likely to allow conditional AI use; reviewers may use AI-assisted tools if they meet disclosure conditions.
- ICMJE member journals are expected to have explicit reviewer AI guidance under the January 2026 recommendations.
- Journals without any stated policy leave you in the position of highest uncertainty: you do not know what standard applies to your manuscript.
The Confidentiality Problem Authors Rarely Think About
Of the 78 journals in the top-100 study that had issued guidance, 71 specifically prohibited uploading manuscript-related content to AI tools. That is a 91 percent rate among the journals with policies. The concern is well-founded, but it also exposes a gap that authors should think about independently of journal policy.
The standard peer review agreement is between the reviewer and the journal. The reviewer agrees to keep the manuscript confidential. The journal agrees not to share the manuscript outside the review process. What neither party traditionally addresses in detail is what happens if the reviewer, acting in good faith and perhaps unaware of the policy, uses a productivity tool or writing assistant that sends content to an external server. Many common tools do this: browser-based grammar checkers, some citation managers, cloud-based note-taking applications. Uploading a manuscript file to any of these systems may technically violate confidentiality, even without any deliberate intent.
Authors cannot control what individual reviewers do with a manuscript once it is in their hands. But understanding the scale of this problem helps calibrate expectations. If you are submitting unpublished data, novel trial results, or commercially sensitive findings, and if you are concerned about premature disclosure, the journal's reviewer AI policy is one signal among several about how carefully the journal manages manuscript confidentiality overall. Journals with rigorous, explicit policies tend to invest more in reviewer onboarding and monitoring than journals that have not addressed the question at all.
When the Journal Itself Uses AI: The NEJM AI Fast Track
Most of the AI peer review conversation focuses on what individual reviewers do. A parallel development, less widely understood by authors, is what journals themselves are doing with AI as part of their editorial workflow. The clearest example in the medical publishing space is the Human+AI review process introduced by NEJM AI, the AI-focused journal under the New England Journal of Medicine brand.
NEJM AI's Fast Track process is invitation-only, applies to articles that editors have already assessed as having a high likelihood of acceptance, and requires explicit author consent before AI tools are introduced into the review. When authors give that consent, the process works as follows: a human editor writes a complete independent review of the manuscript before any AI input is sought. That review is then compared against structured assessments generated using secure, enterprise versions of GPT-5 (with extended thinking) and Google's Gemini 2.5 Pro. A statistical editor separately engages the same models with an iterative, conversational approach to generate a statistical review. The target is a full initial decision within seven days, substantially faster than conventional timelines.
The published account of the process notes that AI proved particularly effective at checking whether articles followed reporting guidelines and whether reported analyses aligned with the pre-registered Statistical Analysis Plan. That finding has practical implications for authors well beyond NEJM AI. If AI tools, when applied to structured clinical manuscripts, are most reliable at exactly this kind of compliance checking, then sloppy adherence to CONSORT, SPIRIT, or TRIPOD-AI is not just an abstract concern about quality. It is the kind of discrepancy that AI-assisted review is well positioned to surface quickly.
What the NEJM AI Fast Track process signals for manuscript preparation
- AI review tools appear most effective at identifying reporting guideline violations and statistical inconsistencies with registered protocols.
- Manuscripts that diverge from their pre-registered analysis plan without explicit justification are vulnerable to AI-flagged scrutiny.
- Human editors remain the decision-makers; AI contributes structured commentary, not final judgment.
- Author consent is currently required before AI tools are used in this structured way, at least at NEJM AI.
- The Fast Track applies only to a subset of invited submissions, not to general submissions.
It is worth being precise about what the NEJM AI model is and is not. NEJM AI is a distinct journal focused on AI in clinical medicine, not a direct replacement for the main New England Journal of Medicine. The Fast Track process has not been announced for NEJM proper. But the fact that a journal under the NEJM brand is publishing a detailed operational description of AI-assisted review, with authorship from the editorial team, signals that this is considered a serious and scalable model rather than a speculative experiment.
What AI-Assisted Review Means for Reporting Guideline Compliance
The NEJM AI finding that AI was effective at checking reporting guidelines is consistent with what makes structured checklists attractive to automated review in the first place. CONSORT for randomized trials, PRISMA for systematic reviews, STROBE for observational studies, SPIRIT for trial protocols, TRIPOD-AI for AI prediction model studies: these are all structured, verifiable requirements. They produce yes/no or complete/incomplete outputs that a language model can assess reasonably well by comparing the checklist against specific manuscript sections.
Human reviewers vary considerably in how carefully they apply these checklists. Some work through them item by item. Others assess the paper holistically and note the most obvious omissions. AI tools do not get distracted, do not have a preference for the science over the methodology, and do not have a prior relationship with the authors that might soften scrutiny. That is not a description of AI superiority, because a checklist review that catches every omission but misses a fundamental flaw in the study design is incomplete. But it does suggest that the surface-level compliance check is where AI-assisted review adds the most consistent value, and where non-compliant manuscripts are most exposed.
For authors, the practical consequence is that reporting guideline adherence is no longer just a courtesy to reviewers or a journal requirement to satisfy at the revision stage. If AI tools are increasingly part of initial assessment, even informally (a reviewer using an AI assistant to check checklist items before writing their review), then a manuscript that formally attaches the CONSORT checklist but has not actually completed it carefully is a liability, not a satisfied formality.
How to Find a Journal's Reviewer AI Policy
The straightforward way is to check the journal's Instructions for Reviewers, which is distinct from the Instructions for Authors. Many journals publish reviewer guidance separately, often buried deeper in the editorial policies section of the journal website. The terms to search for are usually "AI," "artificial intelligence," or "large language model." If you cannot find reviewer guidance at all, that absence is itself informative: it may mean the journal has not addressed the question, which places it in the group where the least certainty applies.
Publisher-level policies are a useful fallback. Elsevier, Springer Nature, Wiley, and BMJ Publishing Group all maintain general AI ethics pages that apply across their titles. These give you the floor: the minimum standard a journal in that publisher group is expected to meet. Individual journals may go further than the publisher baseline, but they rarely fall below it.
ICMJE membership is a reliable proxy for some minimum standard. The 14 member journals of the ICMJE are required to implement the Recommendations, including Section V.B on reviewer AI use. Beyond direct membership, many journals declare that they follow ICMJE recommendations in their author guidelines without being formal members. That declaration is a reasonable signal that the journal at least intends to apply the ICMJE standard to reviewer AI conduct, though it is still worth checking the specific reviewer instructions yourself.
One approach that authors rarely think to take: before submitting a paper with significant unpublished data, write a brief pre-submission email asking the editorial office whether the journal's reviewer instructions address AI use. This is a polite, professional question that most editorial offices will answer directly. A journal with a clear, enforced policy will say so. A journal without one will either equivocate or acknowledge the gap. Either answer is useful.
What Authors Can and Cannot Expect
There is a version of this conversation that becomes paranoid quickly: the idea that your manuscript is being ingested by AI systems you did not consent to, generating assessments you cannot see, and that the entire peer review system is being automated without your knowledge. That version overstates what is currently happening.
Most major medical journals, as of mid-2026, are not routinely using AI as a first-pass reviewer for ordinary submissions. The NEJM AI Fast Track is a structured, consent-based exception for a specific subset of manuscripts at a specific journal. Individual reviewer AI use is governed by journal policy, and at the journals that prohibit it, most reviewers comply. The concern is real but proportionate. The journals you are most likely to be targeting for a serious medical manuscript are also the journals that have thought most carefully about this, because they are the ones with the most to lose from a confidentiality failure or a high-profile AI-related retraction.
What authors can reasonably expect is that the landscape will keep changing. More journals will develop AI review tools for internal editorial screening. More reviewer guidance documents will be written, revised, and made publicly available. And the reporting standards for clinical AI research will continue to tighten, with CONSORT-AI, SPIRIT-AI, and TRIPOD-AI extensions becoming more uniformly required at journals that publish those study types.
The practical posture is to treat the current moment as early in a longer transition, rather than as a stable endpoint. Journals that say nothing about AI in peer review now will not be silent about it in two years.
Preparing a Manuscript in an AI-Informed Peer Review Environment
The most actionable thing to take from all of this is about manuscript quality rather than AI policy awareness. The features of a manuscript that perform well in AI-assisted review are largely the features that perform well in careful human review: complete reporting of methodology, close adherence to registered study protocols, explicit handling of reporting guidelines, and rigorous statistical transparency. These are not defensive measures against AI. They are what the best manuscripts always required.
That said, some habits are worth building specifically in response to the current environment. When you submit a clinical trial, check your CONSORT checklist not as a box to tick but as a genuine cross-reference between the checklist items and the specific paragraphs in your manuscript. When you submit a systematic review, walk through the PRISMA 2020 items and confirm that each one is addressed clearly, not just present somewhere in the text. When you submit an AI prediction model study, apply TRIPOD-AI as a pre-submission self-audit. These habits reduce the surface area available for any reviewer, human or AI-assisted, to flag compliance gaps.
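For authors comfortable with a small amount of scripting, the cross-referencing habit can be made mechanical. The sketch below is a minimal, hypothetical illustration in Python, not a real compliance tool: the checklist items and manuscript locations are invented placeholders, and a genuine audit would use the full item list from CONSORT, PRISMA 2020, or TRIPOD-AI for your study type.

```python
# Minimal pre-submission checklist self-audit (illustrative sketch only).
# The items below are invented placeholders, not actual CONSORT items;
# substitute the full checklist for your study type before relying on this.

# Map each checklist item to the manuscript location(s) where it is addressed.
# An empty list means the item has not yet been tied to a specific paragraph.
checklist = {
    "Trial design described, including allocation ratio": ["Methods, para 1"],
    "Eligibility criteria for participants": ["Methods, para 2"],
    "How sample size was determined": [],  # not yet located in the text
    "Statistical methods for the primary outcome": ["Methods, para 6"],
    "Registry name and registration number": ["Abstract", "Methods, para 1"],
}

# Flag every item that still lacks a concrete location in the manuscript.
missing = [item for item, locations in checklist.items() if not locations]

print(f"{len(checklist) - len(missing)}/{len(checklist)} items cross-referenced")
for item in missing:
    print(f"  UNRESOLVED: {item}")
```

The script itself matters less than the discipline it enforces: every item either points to a specific paragraph or is flagged as unresolved, which is the same complete/incomplete judgment an AI-assisted compliance check would be making.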
If you have deviated from your pre-registered protocol, explain why explicitly in the methods section. The NEJM AI experience found that AI tools are effective at surfacing discrepancies between registration records and reported analyses. If your deviation was legitimate, a clear explanation eliminates the flag before it becomes a comment. If the deviation was not explained or justified, that is a real problem regardless of what tools the reviewer is using.
Finally, if you are also a peer reviewer yourself, which most submitting authors are, know what your target journal's policy says before you review. Using a productivity tool or AI writing assistant while reviewing a manuscript may be common practice in your field, but if the journal prohibits it and you use it anyway, you are exposing yourself to a formal complaint and potentially damaging the author whose paper you were meant to protect. That is a situation worth avoiding with five minutes of reading before you accept the next invitation.
A Note on Transparency Going in Both Directions
The broader conversation about AI in peer review is fundamentally a transparency conversation. Authors are now expected to disclose when they use AI in preparing manuscripts. The expectation is growing, though unevenly, that reviewers disclose when they use AI in assessing those manuscripts. And journals are beginning to disclose when they use AI in their editorial screening processes, as NEJM AI did in its published description of the Fast Track.
That convergence toward transparency is probably the right direction, even if the pace is uneven and the policies are not yet harmonized across the field. The journals that will have the most credibility over the next five years are likely to be the ones that can describe clearly what happens to a manuscript after submission, who or what assesses it, and under what conditions. Authors should look for that clarity when choosing where to submit, and should be appropriately skeptical of journals that remain vague on the question as the norms around them solidify.
For now, the most useful thing you can do is check the reviewer guidance at your target journals before submitting, attend carefully to reporting guideline compliance as though AI may be among the first readers of your methodology, and watch how the major ICMJE members continue to develop their policies over the coming year. This area is moving fast enough that what is current in May 2026 may be different by early 2027. Checking at the time of submission, rather than relying on what you remember from a previous paper, remains good practice.
Further Reading
How to Disclose AI Use in Medical Manuscripts
The author-side equivalent: how to write clear AI disclosure statements for your own manuscript.
The January 2026 ICMJE Update: A Practical Guide for Authors
The full ICMJE 2026 revision, including the new Section V on AI use in publishing.
CONSORT 2025: The Updated Trial Reporting Guideline
AI-assisted review flags reporting guideline gaps first. Get the checklist right from the start.
How to Read Journal Author Guidelines Before You Submit
Finding the reviewer AI policy is one more reason to read the full set of journal documentation before submitting.
Written by Dr. Meng Zhao
Dr. Meng Zhao is a former neurosurgeon turned medical-AI researcher. After years in the operating room, he moved into applied AI for clinical workflows and now leads LabCat AI, a medical-AI company working on decision support and research tooling for clinicians. He built Journal Metrics as a free resource for researchers who need reliable journal metrics without paid database subscriptions.
Related Articles
APC Caps and Funder Pullbacks in 2026: What Medical Authors Need to Know
NIH has proposed capping article processing charges at $2,000 to $6,000 per paper. HHMI banned hybrid APC payments in January 2026. Cancer Research UK withdrew all APC funding in April 2026. Here is what these overlapping changes mean for your next manuscript.
The NIH Data Sharing Plan Overhaul: What the May 2026 Format Change Means for Researchers
NIH is replacing its two-page narrative Data Management and Sharing Plan with a streamlined YES/NO format on May 25, 2026. Here is what the new structure requires, how it connects to journal data availability statements, and where the compliance gaps still are.
Open Access Mandates in 2025: What NIH, Wellcome, and UKRI Now Require
The NIH removed its 12-month embargo in July 2025. Wellcome stopped funding hybrid journal APCs in January 2025. UKRI ended support for transformative journals in December 2024. Here is what these changes mean for your next submission.