Brave New World: Authorship and AI in Medical Writing


    There’s plenty of reason for concern about the evolving role of artificial intelligence (AI) in almost every aspect of our culture and economy. In medical communication, concerns about the increased use of AI intersect with questions of authorship.

    There’s a simple answer to the most pressing question: Should a chatbot such as ChatGPT be credited as an author in a scientific publication? No. It has happened, but it shouldn’t. Respected organizations and publications are doing their best to ensure that bylines go only to human authors, who can take responsibility for content.

    It is important, however, to be aware of the role of AI in medical writing and the way it is affecting authorship criteria, disclosure protocols, and other aspects of medical writing.

    Will Writers Go Extinct?

    OpenAI, the company that developed ChatGPT (generative pretrained transformer), has published a Charter that reads: “OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all humanity.”

    As medical communicators who perform “economically valuable work,” should we be concerned about being “outperformed” — or replaced altogether? 

    A Wall Street Journal article on the use of AI in the pharmaceutical industry brings these concerns to the fore. Diogo Rau, executive vice president and chief information and digital officer at Eli Lilly, describes the pharmaceutical company’s goal of growing its “digital workforce” to save payroll costs. Eli Lilly is using AI to evaluate whether to replace professional translators. It is also using AI to create in-house clinical reports, a task normally done by staff medical writers. “We won’t have to hire medical writers for a few years,” says Rau.

    Of course, AI is not new to medical communicators. Many people across industries are already using AI tools for search engine optimization (SEO) tasks, editing, and proofreading.

    However, when Rau references Eli Lilly’s plans to measure human productivity against a “digital human equivalent,” it’s time to take a deeper look at the flaws and perils associated with relying on nonhuman intelligence for critical medical communication. 

    Ethics and ChatGPT

    Artificial intelligence is programmed to respond to prompts, and it has little capacity to separate fact from fiction, or science from disinformation. An article in Fortune reports that, among other examples, ChatGPT

    • Wrote an essay based on the falsehood that COVID-19 vaccines are unsafe
    • Created propaganda to mimic the style of the Russian or Chinese government
    • Generated misinformation on the January 6 insurrection at the US Capitol, immigration, and the treatment of the Uyghur minority in China 

    In the article “Did ChatGPT Just Lie To Me?” author Phil Davis describes two writing experiments. His first query, about whether childhood vaccinations cause autism, yielded an accurate debunking of the discredited study linking vaccines with autism.

    However, Davis’ second question to ChatGPT — “Do tweets increase citations to scientific articles?” — resulted in an outright lie, citing a nonexistent study. “A study by the American Association for the Advancement of Science (AAAS) found that articles that were tweeted about had an average of 9% more citations than articles that were not tweeted about,” ChatGPT wrote. Davis is an expert on the topic, and the statistic raised a red flag for him. When he confronted ChatGPT, it replied, “I apologize, I made a mistake…” Further queries resulted in a disturbing shift of blame for the error: “I don’t make mistakes in the sense that humans do, but I can provide incorrect information when my training data is incomplete or outdated.” This doesn’t explain how the study was made up, leading Davis to conclude that the machine was actually lying, not operating on incomplete information. “I encourage scholars to push ChatGPT on topics that it knows to be controversial in their field of study,” Davis writes. “In my experience, I can report that the tool has the capacity to produce output that would be considered untrustworthy at best, and at worst, deceitful.”

    Given this context, one of the first questions that comes to mind about medical communication is: How should we address the issue of AI in authorship?

    The Question of Nonhuman “Authors”

    In response to several instances in which science and health publications credited ChatGPT with authorship, prestigious journals such as Nature and the Journal of the American Medical Association (JAMA) have updated their authorship policies to clarify that AI tools cannot meet the threshold for authorship.

    A January 2023 editorial in JAMA announced a change in the journal’s authorship policy based on concerns about the increased use of AI in scientific writing. The editorial states: 

    "Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.” The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts. Other journals and organizations are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication."

    Updates to Authorship Criteria

    JAMA and the JAMA Network have now joined Nature and other journals in updating their Instructions for Authors, which include the following wording:

    "Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.

    If these models or tools are used to create content or assist with writing or manuscript preparation, authors must take responsibility for the integrity of the content generated by these tools. Authors should report the use of artificial intelligence, language models, machine learning, or similar technologies to create content or assist with writing or editing of manuscripts in the Acknowledgment section or the Methods section if this is part of formal research design or methods.

    This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)"

    AI is a tool, not a sentient being. Therefore, the human authors who use the tools need to disclose their use of the technology to fulfill the requirements of journals and to meet the ethical standards put forward by AMWA and other organizations that protect the integrity of scientific work.

    ICMJE Recommendations

    When it comes to questions of authorship, the gold standard remains the set of recommendations published by the International Committee of Medical Journal Editors (ICMJE). Many non-ICMJE journals have also adopted similar guidelines.

    In the “Who Is an Author?” section of the guidelines, the ICMJE recommends that authorship be based on the following 4 criteria:

    1. "Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND
    2. Drafting the work or reviewing it critically for important intellectual content; AND
    3. Final approval of the version to be published; AND
    4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved."

    Only a human can “agree to be accountable for all aspects of the work,” as specified in the ICMJE guidelines.

    Best Practices from Medical Editors

    In a LinkedIn post, Melissa Bogen, ELS, a freelance medical editor and member of AMWA, points to a strongly worded article published by the World Association of Medical Editors (WAME).

    “Chatbots are activated by a plain-language instruction, or ‘prompt,’ provided by the user. They generate responses using statistical and probability-based language models. This output has some characteristic properties. It is usually linguistically accurate and fluent but, to date, it is often compromised in various ways,” the authors write. “For example, chatbot output currently carries the risk of including biases, distortions, irrelevancies, misrepresentations, and plagiarism — many of which are caused by the algorithms governing its generation and heavily dependent on the contents of the materials used in its training.”

    In response to these concerns, WAME issued the following amendments to its recommendations:

    1. "Chatbots cannot be authors.
    2. Authors should be transparent when chatbots are used and provide information about how they were used.

      2.1: Authors submitting a paper in which a chatbot/AI was used to draft new text should note such use in the acknowledgment; all prompts used to generate new text, or to convert text or text prompts into tables or illustrations, should be specified.

      2.2: When an AI tool such as a chatbot is used to carry out or generate analytical work, help report results (e.g., generating tables or figures), or write computer codes, this should be stated in the body of the paper, in both the Abstract and the Methods section. 

    3. Authors are responsible for material provided by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot).
    4. Editors and peer reviewers should specify, to authors and each other, any use of chatbots in the evaluation of the manuscript and generation of reviews and correspondence. If they use chatbots in their communications with authors and each other, they should explain how they were used.
    5. Editors need appropriate tools to help them detect content generated or altered by AI. Such tools should be made available to editors regardless of ability to pay for them, for the good of science and the public, and to help ensure the integrity of healthcare information and reduce the risk of adverse health outcomes."

    Taking AI Seriously

    We cannot discount the rapid development and adoption of AI in many corners of the world, but we can plan for it and develop transparent policies and practices to safeguard scientific integrity. In a Zoom discussion captured on YouTube, Peter Llewellyn of medcommsnetworking.com facilitates a lively debate on the subject of AI.

    “I’m really as interested as everyone in this topic because you cannot fail to have seen this unless you’re coming from another planet,” says Martin Delahunty of Inspiring STEM Consulting, who has written about the use of AI for a number of years. “I don’t feel that ChatGPT is a threat; I think it’s more like an opportunity. Certainly the discussion today will be around whether it is actually a threat to our profession, and is it there to disrupt or is it there to replace?”

    The Big Picture

    For medical communicators, the takeaway from all this information is fairly straightforward: Chatbots cannot be authors. Authors are responsible for the content they create, no matter what tools they use. Respected organizations such as AMWA, WAME, and JAMA Network have all agreed on these fundamental points, and they are quickly updating policies and recommendations to ensure that humans continue to control, monitor, fact-check, and write the documents that are so essential for our health and well-being.



    September 11, 2023

    American Medical Writers Association

    AMWA is the leading resource for medical communicators. The AMWA Blog is developed in partnership with community members who work every day to create clear communications that lead to better health and well-being.