Ethical AI in Medical Writing: How to Harness Its Power Responsibly


    Artificial intelligence (AI) can analyze diagnostic test results, identify cancer cells, and help construct cohorts for clinical trials, but writing for a medical journal or a regulatory agency is more than this technology can accomplish without its human counterpart.

    Medical writing remains a distinctly human discipline. Most major medical journals limit authorship to humans because AI is a tool designed to support, not replace, human expertise, judgment, and accountability. At the same time, AI is rapidly transforming many aspects of medical communication. For medical writers, generative AI (GenAI) tools, such as large language models (including generative pre-trained transformer [GPT]-based chat systems), offer powerful opportunities to decrease the “time to insights,” streamline workflows, and support the synthesis of vast amounts of information.

    This blog post highlights how medical writers and editors can use AI ethically and responsibly. It addresses how AI can add real value, where clear boundaries are required, and how to align AI use with journal policies and professional standards, protecting scientific integrity.

    What Is Ethical Use of AI in Medical Writing?

    Ethical use of AI in medical writing refers to the responsible, transparent, and accountable use of AI tools to support human expertise. This should be the foundational expectation for all writers and editors, regardless of which tools they choose for the work at hand.

    When the right tool is matched to the task, AI can enhance process efficiency, messaging clarity, and overall document consistency while preserving the medical writer’s central role in applying scientific judgment and working responsibly.

    Ethical use of AI means that

    • Human authors retain full responsibility for accuracy, integrity, and originality
    • AI-generated outputs are verified, edited, and contextualized by qualified medical writers and subject matter experts
    • The use of AI aligns with journal policies, regulatory requirements, applicable laws, and professional standards
    • AI use is disclosed when required

    Medical writers can use AI in medical communication to assist them in their work. For example, GPTs can help writers with grammar, language choice, reference identification, content refinement, and drafting summaries. And unlike a human collaborator, an AI tool can revise content as many times as needed while a draft develops.

    Unethical use of AI can occur when tools are treated as authors or when AI content is accepted without critical review. In medical and scientific communication, AI is best understood as an assistive technology, not a sole creator.

    Medical Writers and AI Fundamentals

    Two widely cited definitions of AI are described by J. Kelly Byram, MS, MBA, ELS, in an AMWA Journal article, “Communicating About and With Artificial Intelligence Applications”:

    1. A machine performing a task requiring human intelligence
    2. A machine replicating human intelligence

    As Byram notes, AI applications developed so far lack human traits such as creativity, empathy, judgment, and accountability. As a result, AI tools augment human capabilities rather than replace them. Notably, that augmentation can be beneficial or harmful, an outcome that cannot be overlooked.

    To use these tools responsibly and to evaluate their capabilities and limitations, medical communicators need a basic understanding of how different types of AI work.

    Machine Learning (ML)

    Machine learning is the foundation for many of the AI applications used in medical communication. ML systems build on prior knowledge, learn from data patterns, and adapt as they process new information.

    Deep Learning (DL)

    Deep learning is a subset of machine learning that uses multilayered artificial neural networks inspired by the function and structure of the human brain. Byram writes that DL systems can learn “nonlinear, high dimensional relationships” from large, multimodal datasets. The ability of these models to generate internal decision pathways contributes to the well-known “black box effect” that can limit transparency and trust.

    Large Language Models (LLMs)

    Large language models, including GPT-based applications such as ChatGPT, are trained on vast amounts of text and can generate human-like language in response to prompts. (Related generative models, such as DALL-E, apply similar techniques to produce images rather than text.)

    While these tools can significantly enhance process efficiency and clarity, their use also introduces important scientific and ethical challenges.

    Challenges in Using AI in Medical Writing

    AI Hallucinations and Scientific Risks

    Publicly available GPTs and custom models are trained on internet data, which is rife with inaccuracies. These tools cannot distinguish fact from fiction and often fabricate statistics or citations.

    Beyond accuracy concerns, GenAI can also introduce bias into health communication through multiple mechanisms, including training data that may not include diverse populations, as discussed in a 2025 AMWA Journal article “Artificial Intelligence Bias in Health Communication: Risks and Strategies for Medical Writers.” This inequitable lens risks reinforcing or worsening health disparities.

    For these reasons, medical writers must always verify the accuracy of AI-assisted output. As AI becomes more embedded in medicine and science, new guidance and tools continue to emerge to help ensure accuracy and integrity in published work. However, staying current with evolving best practices for ethical use of AI is essential. Resources such as the US FDA and European Medicines Agency’s Guiding Principles of Good AI Practice in Drug Development provide a strong starting point.

    Why AI Cannot Be an Author

    Another limitation of GenAI in medical communication is that it cannot meet the requirements for authorship established by leading journals and organizations, including the Journal of the American Medical Association (JAMA). An analysis of author guidelines from 100 medical journals found that 80% permit only human authors and prohibit AI from authorship, according to a 2025 AMWA Journal article, “A Comparative Analysis of Author Guidelines on the Use of Generative Artificial Intelligence for Manuscript Preparation in the Top 100 Medical Journals.”

    These journal policies align with widely accepted authorship criteria. The International Committee of Medical Journal Editors (ICMJE) specifies 4 criteria for authorship, including giving final approval of the manuscript and agreeing to be accountable for all aspects of the work. Another AMWA Journal article, “Current Guidelines on the Use of Generative Intelligence in Peer-Reviewed Scholarly Publications,” reports that both the World Association of Medical Editors (WAME) and a group of editors of bioethics and humanities journals concur that GenAI chatbots cannot be authors.

    The ICMJE criteria underscore why GenAI cannot qualify as an author: It cannot approve a final manuscript, respond to questions about data integrity, or be held accountable for published content.

    Disclosure and Confidentiality of AI in Medical Writing

    Some AI-derived tools, such as spelling and grammar checks and reference organizers, are widely used and generally do not require disclosure. However, using GenAI to generate outlines, drafts, or manuscripts may trigger disclosure requirements.

    Journals may require authors to disclose AI tools used, including the model name and version, the nature of the content generated or edited, and relevant prompts.

    Confidentiality is also critical: Medical writers should never enter proprietary, confidential, or copyrighted information into AI systems unless the system is secure, the use is explicitly approved, and any copyrighted material is properly licensed for that use.

    These disclosure and confidentiality expectations reflect a broader concern: trust. In a 2025 AMWA Journal article, “Trust, Artificial-Intelligence Generated Images, and Health Communication Policy,” Abbie Miller, MS, MWC, notes that health communication depends on authenticity. She reports that an iStock/Getty survey found that 90% of respondents want to know whether an image was created using AI technology.

    Next Steps for Medical Writers and AI

    As AI use increases in medical communication, professional responsibility remains firmly with human writers and editors. AMWA offers resources to support medical communicators in understanding the impact of these technological advances, including the member-exclusive AI Tip Sheet for Medical Writers. Results from the 2025 member survey on generative AI use also appear in the AMWA Journal, offering additional insight into how medical communicators are using GenAI tools.

    Medical writers are encouraged to stay informed about emerging guidance and to apply GenAI deliberately, transparently, and in alignment with ethical and scientific standards that sustain trust in medical communication.


    AMWA acknowledges the contributions of Jill Sellers, BSPharm, PharmD, RPh, and Dominic De Bellis, PhD, ACRP-CP, for peer review in the development of this AMWA resource.


    March 23, 2026 at 9:00 AM

    American Medical Writers Association

    AMWA is the leading resource for medical communicators. The AMWA Blog is developed in partnership with community members who work every day to create clear communications that lead to better health and well-being.