
How to Disclose AI Use in Medical Manuscripts: A 2026 Practical Guide

What medical journals now expect when authors use ChatGPT, Claude, or other AI tools during manuscript preparation, and how to write a disclosure statement that will not stall your submission.

Dr. Meng Zhao · Physician-Scientist · Founder, LabCat AI
Published: April 2026 · 18 min read · Publishing Ethics

Two years ago, most medical authors could quietly use a writing tool to tidy up a methods section without thinking twice about it. That window has closed. As of 2026, nearly every major biomedical publisher has a written policy on generative AI, and many journals now screen submissions for undisclosed AI assistance as part of their routine editorial checks. If you are preparing a clinical study, a review, or a case report and planning to use any AI tool during writing, you now need a disclosure strategy before you start, not after you submit.

The tricky part is that the rules are genuinely moving. The ICMJE, COPE, WAME, and the big publishers each phrase their expectations a little differently, and medical specialty journals layer their own preferences on top. What one editor treats as routine language polishing, another treats as substantive authorship and wants declared in the methods. The safest posture for authors is to assume that any AI use above basic spellcheck should be disclosed, and to practice writing the disclosure cleanly so it reads as transparency rather than confession.

Working Principle

Treat AI disclosure like conflict of interest reporting. You do not have to stop using the tool. You have to be specific about what you used, where you used it, and what a human still verified before the paper went out.

Why the Rules Tightened So Fast

Medical publishing took the generative AI wave harder than most fields. Within a year of ChatGPT's release, editors were already seeing manuscripts with invented references, confidently wrong dosages, fabricated trial registration numbers, and phrasing that was obviously lifted from a chatbot without checking. The response has not been a ban. The response has been a shift toward mandatory, specific disclosure, because outright prohibition is both unenforceable and unrealistic given how widely these tools are already used in translation, editing, and literature searching.

The other pressure point is authorship itself. Medical journals take authorship seriously because signing your name to a clinical paper carries accountability that other fields do not always match. A large language model cannot consent to authorship, cannot take responsibility for conclusions, and cannot answer a post-publication integrity query. That is why every major medical publishing body has converged on the same position: AI tools are not authors. A human must remain accountable for every claim in the paper. Disclosure exists so readers know where the human stopped and the tool started.

If you are irritated by these policies, try to read them from the editorial side instead. Editors cannot verify AI use from a submission file. They rely on you to describe it honestly. The papers that have been retracted so far for AI-related problems almost always failed at the disclosure step first, and the substantive mistake (an invented citation, a miscopied number, a hallucinated guideline) followed from treating the tool as trustworthy output rather than draft material.

What Actually Needs to Be Disclosed

Not every interaction with AI tooling needs a line in your methods. The current consensus, reflected in ICMJE updates and the latest publisher policies, separates AI use into tiers. Basic grammar and spelling correction by tools like classic Grammarly or Word's built-in editor does not generally require disclosure, because it functions the way a proofreader does and does not originate text. Anything that drafts, rewrites, summarizes, translates, or analyzes content is different. That use belongs in a statement.

The practical test is whether the tool generated or transformed meaning rather than merely checked spelling. If you asked a chatbot to rephrase a paragraph so it read better, that is a transformation. If you used an AI tool to translate your methods from another language into English, that is a transformation. If you used an AI assistant to summarize your results into an abstract, that is a transformation, and a particularly sensitive one. If you used a tool to help interpret your data, generate figure captions, classify free text, or draft a literature review, those should all be named in the appropriate section of the paper.

Uses that currently require disclosure

  1. Drafting any part of the manuscript body, abstract, or discussion.
  2. Summarizing or paraphrasing your own prior writing into new text.
  3. Translating manuscript content between languages.
  4. Generating figures, illustrations, graphical abstracts, or icon sets.
  5. Analyzing data, coding qualitative responses, or classifying records.
  6. Suggesting references or literature, even if you then verified them manually.
  7. Rewriting passages for tone, clarity, or concision beyond sentence-level grammar.

One gray area worth naming: purpose-built academic editors such as Paperpal, Writefull, and Trinka occupy a middle ground. Because these tools are trained on scholarly corpora and marketed as grammar and style refinement, some journals treat their output as equivalent to professional human editing and ask for a note in the acknowledgements rather than the methods. Others still want them listed alongside general-purpose chatbots. If you are unsure, disclose. A line in the acknowledgements has never gotten a paper rejected. Hidden AI use has gotten papers retracted.

Where the Disclosure Belongs in the Manuscript

Authors often lose points not because they failed to disclose, but because the disclosure ended up in the wrong place. Journals increasingly want AI disclosures in specific locations depending on the type of use. Reading the target journal's latest instructions for authors closely is the only way to get this right, but there is a workable default pattern that most publishers accept.

Use of AI for analysis, data processing, classification, or any computational step that shaped the results belongs in the Methods section. This is the same principle that applies to any other tool or algorithm that influenced the outputs you are asking reviewers to evaluate. Use of AI for drafting, rewriting, or translating the manuscript text itself belongs in either the Acknowledgements or a dedicated statement near the end of the paper, depending on the journal's template. Many journals now include a specific field in their submission system labeled something like Generative AI Use Declaration, and whatever you enter there should match what appears in the manuscript.

Cover letters are not the right place for this disclosure, even though authors sometimes tuck it there as a courtesy. The cover letter is read once and then lives outside the published record. Your AI disclosure needs to travel with the paper after publication, which means it belongs in the body of the manuscript where future readers can find it.

How to Write a Clean AI Disclosure Statement

The best disclosures sound boring. They name the tool, say what it was used for, and confirm that the authors reviewed and take responsibility for the output. They do not apologize, justify, or hedge. Reviewers are more suspicious of defensive language than of matter-of-fact admission.

Template: Language refinement only

During the preparation of this manuscript, the authors used ChatGPT (OpenAI, GPT-4.5) to improve language and readability of the discussion section. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

Template: Translation assistance

The methods and results sections of this manuscript were initially drafted in Mandarin by the authors and translated into English using DeepL Pro. All translated text was reviewed by two bilingual authors and revised for clinical accuracy. The authors take full responsibility for the final content.

Template: Computational use in analysis

Free-text patient-reported outcome responses (n = 1,248) were initially classified into symptom categories using Claude 3.7 Sonnet (Anthropic) with a structured prompt developed by the study team. A random 15 percent sample was independently re-classified by two clinicians, with inter-rater agreement calculated against the model's labels (κ = 0.82). Discrepant cases were resolved by consensus and the final dataset reflects the human-adjudicated labels.

Notice what these three examples have in common. They name the exact tool with version or vendor. They name the exact section or dataset it touched. They describe how humans remained in the loop. They close with responsibility. A disclosure that includes these four elements rarely causes editorial concern, even when the underlying AI use is extensive.
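
If your AI use falls into the computational category, it is worth scripting the validation step rather than doing it ad hoc, because reviewers may ask to see it. Below is a minimal sketch of the workflow described in the third template, written in Python against the Anthropic SDK and scikit-learn. The model ID, symptom categories, file names, and column names are illustrative placeholders, not anything a journal prescribes.

# Sketch of the third template's workflow: classify free-text responses
# with a structured prompt, then validate a random 15 percent sample
# against clinician labels. Model ID, categories, file and column names
# are all illustrative assumptions.
import anthropic
import pandas as pd
from sklearn.metrics import cohen_kappa_score

CATEGORIES = ["pain", "fatigue", "nausea", "mood", "other"]  # hypothetical

SYSTEM = (
    "You classify patient-reported outcome responses into exactly one "
    f"symptom category from this list: {', '.join(CATEGORIES)}. "
    "Reply with the category name only."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def classify(text: str) -> str:
    msg = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # illustrative model ID
        max_tokens=10,
        system=SYSTEM,
        messages=[{"role": "user", "content": text}],
    )
    return msg.content[0].text.strip().lower()

df = pd.read_csv("responses.csv")  # assumed columns: response_id, text
df["model_label"] = df["text"].map(classify)

# Draw a reproducible 15 percent sample for independent clinician review.
sample = df.sample(frac=0.15, random_state=42)
sample.to_csv("validation_sample.csv", index=False)

# After clinicians return adjudicated labels, summarize agreement with
# Cohen's kappa, as the template reports (e.g. kappa = 0.82).
adjudicated = pd.read_csv("clinician_labels.csv")  # response_id, clinician_label
merged = sample.merge(adjudicated, on="response_id")
print(cohen_kappa_score(merged["model_label"], merged["clinician_label"]))

The point of scripting it is auditability: the random seed, the sample, and the agreement statistic can all be regenerated if an editor asks how the validation was done.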

Known Differences Among Major Publishers

Most of the major medical publishers now publish their AI policy on a standing page, and these are worth bookmarking because they are revised regularly. Elsevier, Springer Nature, Wiley, Taylor & Francis, Sage, and BMJ have all issued guidance that agrees on the core points: AI cannot be an author, substantive AI use must be disclosed, and authors are fully accountable for any output introduced into the manuscript. The differences show up in the surrounding details.

Some publishers require a standalone statement at the end of the manuscript, typically titled Use of Generative AI or similar. Others ask for inclusion in an existing acknowledgements or methods section. A few flagship medical journals go further and require authors to confirm in the submission system checklist that they have not used AI to generate images or figures without explicit approval from the journal. If you are submitting to JAMA, the New England Journal of Medicine, The Lancet, or BMJ, read the current instructions at the time of submission rather than relying on what you remember from a previous paper. These four in particular have tightened their language more than once in the past 18 months.

Specialty journals can be stricter than their parent publisher. A general Elsevier policy might accept a short acknowledgement line, while a clinical oncology journal under the same imprint might require a fuller methods-style description because manuscripts in that field often involve prompt-engineered classification of adverse event reports. If the specialty journal says more, follow the specialty journal.

Images, Figures, and the Harder Line

Disclosure rules for AI-generated text are relatively permissive. Disclosure rules for AI-generated images are not. Most medical publishers currently prohibit or strongly restrict the use of generative AI to create scientific figures, clinical illustrations, anatomical drawings, photographs of patients, pathology slides, radiology images, and anything that could be interpreted as primary data. The concern is obvious. A synthesized image of a tumor margin or a fabricated histological section is not a stylistic choice. It is fabrication.

Where AI-generated visuals are allowed, they are typically restricted to schematic illustrations, graphical abstracts, or conceptual diagrams that do not represent real patient data. Even in those cases, you should disclose the tool used and confirm that no real patient or clinical image was used to train or prompt the generation. When in doubt on the image question, contact the editorial office before submission. Images are one of the few areas where a polite pre-submission email is common, welcome, and will not cost you editorial goodwill.

What Happens When AI Use Is Not Disclosed

The honest summary is that most undisclosed AI use never gets caught. Current AI detection tools work on probabilities and are not accurate enough to trigger a rejection on their own. But the cases that are caught are caught because something else in the paper went wrong first. Hallucinated citations are the most common trigger, followed by suspicious phrasing patterns that reviewers flag, and then by post-publication comments from readers who recognize generated prose. When disclosure was missing, the consequence is rarely just a correction. It usually escalates to an expression of concern or a retraction, because the journal cannot verify what else might be unreliable.

This is why the safer default is to disclose modestly rather than hide comprehensively. A reader who learns in the acknowledgements that you used a language tool to polish your English is not going to doubt your data. A reader who discovers through a retraction notice that the methods section had been ghostwritten by a chatbot is going to doubt everything.

A Pre-Submission Self-Check for AI Disclosure

Before you submit, walk through the paper with the specific aim of mapping AI use. Do not rely on memory. Ask every co-author what tools they used at what stage, because the corresponding author is not always the person who ran a translation pass or prompted a chatbot for a summary. Write it down. Then match each use against the journal's current policy and decide where it belongs in the manuscript.

Questions to ask every co-author before submission

  • Did you use any AI tool to draft, rewrite, summarize, or translate text that ended up in the manuscript?
  • Did you use any AI tool to analyze, classify, or process data reported in the paper?
  • Did you use any AI tool to suggest or generate references or literature?
  • Did you use any AI tool to create, alter, or enhance figures or images?
  • Did anyone on the team use a specialized medical writing assistant such as Paperpal, Writefull, or Trinka, and at what stage?
  • If so, can you name the specific tool, version, and date it was used?

Once you have the list, write your disclosure block and put it through the same review cycle you would use for any other substantive section of the paper. This is not administrative text. It is part of the scientific record.
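
If keeping that record in an email thread feels fragile, a small structured log works just as well. The sketch below shows one illustrative way to capture per-tool audit entries in Python and render them into a draft disclosure block; every field name and the output phrasing are our own, not any journal's required format.

# Illustrative audit record for per-co-author AI use, plus a helper
# that drafts a disclosure block from it. Field names and phrasing
# are assumptions, not a journal-mandated format.
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    co_author: str     # who used the tool (kept for the audit, not published)
    tool: str          # e.g. "DeepL Pro" or "ChatGPT (OpenAI, GPT-4.5)"
    date_used: str     # ISO date, e.g. "2026-01-14"
    purpose: str       # e.g. "translation of the methods into English"
    section: str       # e.g. "Methods"
    human_review: str  # how the output was verified

def draft_disclosure(records: list[AIUseRecord]) -> str:
    # One sentence per tool use, closed with the responsibility statement.
    lines = [
        f"The authors used {r.tool} for {r.purpose} ({r.section}). "
        f"{r.human_review}"
        for r in records
    ]
    lines.append(
        "The authors reviewed and edited all AI-assisted content and take "
        "full responsibility for the content of the publication."
    )
    return " ".join(lines)

records = [
    AIUseRecord(
        co_author="M. Zhao",
        tool="DeepL Pro",
        date_used="2026-01-14",
        purpose="translation of the methods from Mandarin into English",
        section="Methods",
        human_review="All translated text was reviewed by two bilingual authors.",
    ),
]
print(draft_disclosure(records))

Whatever form the log takes, the useful property is that entries are written down at the time of use, so the disclosure block at submission becomes a rendering job rather than an archaeology project.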

A Note on Reviewer Use of AI

Disclosure goes both directions now. If you are invited to peer review a medical manuscript, most journals prohibit uploading the manuscript or any identifiable portion of it into a public AI tool, because that breaks confidentiality. Some publishers have begun to explicitly ask reviewers to confirm they did not use a generative AI tool to draft or summarize their review. The logic is the same as for authors. A peer review signed by a reviewer is supposed to represent the reviewer's own assessment. If you let a chatbot produce it, you have outsourced something non-transferable.

If AI is already part of your normal working routine, this can feel inconvenient. It is still the current rule, and the policy rationale is reasonable. Treat review invitations as you would treat any other form of privileged material, and do not paste the manuscript into external systems.

Where This Is Heading

The current policies are not final. Over the next year or two, expect clearer tiering between background use (grammar checks, reference formatting, automated transcription) and substantive use (drafting, translation, data classification). Expect better structured metadata fields in submission systems, so disclosure is captured cleanly rather than buried in free text. Expect more medical journals to begin running routine screening tools at submission, not because those tools are accurate, but because their presence is a deterrent. And expect specialty journals to diverge further from their publisher-level policies, especially in fields like radiology, pathology, and clinical genomics where AI is already embedded in the underlying research.

None of this should be alarming to researchers who are using these tools carefully. The authors who are going to run into trouble are the ones who treat disclosure as optional or who assume policies will relax before anyone notices their paper. The ones who will be fine are the ones who treat AI the way they would treat any other methodological choice: name it, describe it, and stand behind it.

If you get into the habit of writing your AI disclosure before you start drafting, rather than after you finish, you will spend less time editing it later and more time on the science that actually carries your paper.


Written by Dr. Meng Zhao

Physician-Scientist · Founder, LabCat AI

MD · Former Neurosurgeon · Medical AI Researcher

Dr. Meng Zhao is a former neurosurgeon turned medical-AI researcher. After years in the operating room, he moved into applied AI for clinical workflows and now leads LabCat AI, a medical-AI company working on decision support and research tooling for clinicians. He built Journal Metrics as a free resource for researchers who need reliable journal metrics without paid database subscriptions.
