Artificial intelligence (AI) and its subcategory, machine learning (ML), are evolving quickly, and regulators around the world are working to respond to new opportunities while mitigating the potential risks of these technologies.
The European Union appears to be ahead of the United States when it comes to regulating and addressing AI in the pharmaceutical industry. However, the US Food and Drug Administration (FDA) is stepping up its efforts to foster discussion and involve multiple stakeholders in adapting its regulatory environment to accommodate uses of AI.
For medical communicators, it is important to know where to find guidance and even-handed discussions of the ethical use of AI in the pharmaceutical industry.
The EU Artificial Intelligence Act
The EU Artificial Intelligence Act (AIA) is a groundbreaking regulatory framework that groups AI applications into three risk categories: unacceptable risk, high risk, and a third, largely unregulated category. “The EU AI Act could become a global standard,” the EU Artificial Intelligence Act site reads, adding that Brazil passed a bill to regulate AI in 2021.
The EU AI Act Compliance Checker is a tool that helps stakeholders understand their responsibilities and obligations.
The EU: European Artificial Intelligence Act
Since 2021, the EU has proposed potential rules and restrictions on the use of AI to help ensure that devices are safe and patients are protected while still encouraging innovation. The European Commission regulates AI medical software in the EU in the same way it regulates other medical software. The rules on regulating software are spelled out under Regulation (EU) 2017/745 on medical devices (MDR), with several classes tied to the potential impact of the software on health and safety.
Artificial Intelligence Workplan to Guide Use of AI in Medicines Regulation
The European Medicines Agency (EMA) has created a workplan to help the European Medicines Regulatory Network (EMRN) create guidelines and support for the ethical use of AI. “The European medicines regulatory network’s (EMRN) vision on AI is for a regulatory system harnessing the capabilities of AI for personal productivity, process automation and systems efficiency, increased insights into data and strengthened decision-support for the benefit of public and animal health,” the Introduction to the document reads.
The document focuses on four dimensions: guidance, policy, and product support; tools and technologies; collaboration and change management; and experimentation. It will be updated by the HMA-EMA Big Data Steering Group.
The US FDA
The US Food and Drug Administration (FDA) reports an increase in submissions that include AI and ML in many different aspects of drug development: drug discovery, clinical research, postmarket safety surveillance, and pharmaceutical manufacturing. “To meet these challenges, FDA has accelerated its efforts to create an agile regulatory ecosystem that can facilitate innovation while safeguarding public health,” the agency’s website states.
The FDA’s Center for Drug Evaluation and Research (CDER) worked with the Center for Biologics Evaluation and Research (CBER) and the Center for Devices and Radiological Health (CDRH) to create several publications.
- The AI/ML for Drug Development Discussion Paper is not a guidance or policy document, but it provides an important view of the landscape of current and future uses of AI/ML, considerations for using these technologies, and discussion among stakeholders on issues related to the use of AI in drug development.
- The CDER Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative includes a publication titled Artificial Intelligence in Drug Manufacturing. It is not a regulatory document, but it is being used as the agency develops guidelines. Importantly, this paper includes relevant examples of the ways that AI will be used in pharmaceutical manufacturing, including:
- Process design and scale-up
- Advanced process control
- Process monitoring and fault detection
- Trend monitoring
The US: The FDA Digital Health Center of Excellence
In the United States, the FDA created the Digital Health Center of Excellence to address the need for oversight and awareness regarding AI and other digital advances. The FDA promotes transparency principles for the use of AI or machine learning, and the site includes information on Transparency for Machine Learning–Enabled Medical Devices: Guiding Principles. Two important principles are principle 7: Focus is placed on the performance of the human-AI team and principle 9: Users are provided clear, essential information. “Transparency is essential to patient-centered care and for the safety and effectiveness of a device,” the principles state. “The transparent and consistent presentation of information, including known gaps in information, can have many benefits.”
The US: FDA Recommendations
In the United States, the FDA seeks to follow the classification of the International Medical Device Regulators Forum (IMDRF), which categorizes software as a medical device (SaMD) according to the level of risk and significance of the information provided by the software. For example, is it used to treat or diagnose, drive clinical management, or inform clinical management?
Device Regulation: Comparing the US and the EU
An article in the May 22 edition of the Journal of Medical Device Regulation explores the similarities and differences in regulating the use of AI in medical devices and SaMD in the US and the EU.
“Several standards (or parts thereof) are available to help regulate AI or SaMD in medical devices across the globe,” the authors write. “Recently, however the regulatory controls for these products have been evolving, becoming more stringent and adding the concept of a quality system as mandatory in response to growing concerns that medical devices and diagnostics using these technologies, especially those that pose a high risk to the human body, are not currently subject to adequate control before being used by or on patients.”
The article urges manufacturers to consider both the individual and the societal impacts of AI.
The piece outlines some of the major challenges regarding safety, scientific validity, and the lack of medical expertise among companies that develop AI and machine learning systems. Further, clinical experts and health care practitioners are often involved only late in the development stage, which can delay a device's path to market.
The authors suggest the need for well-defined guidelines, increased knowledge of the requirements for a Quality Management System, and direct relationships between engineers and biological experts.
The authors also point to several international standards that relate to various aspects of AI-based devices or diagnostics.
Recent Literature
A July 2024 article in Regulatory Focus explores the AIA’s legal framework for regulating AI in medical devices, in vitro diagnostic devices (IVDs), and other products.
This post from the IQVIA blog predicts explosive growth of AI in health care and also states, “A large segment of AI’s use in health care would be classified as ‘high-risk’ under the [AIA] Act and thus subject to multiple requirements if it is developed or deployed within the EU.”
A Collaborative Approach to Best Practices
As regulations and guidelines continue to evolve, agencies in the EU and the US are both working to address principles of safety, privacy, and efficacy while still encouraging innovation in the industry.
This article in Regulatory Focus describes some best practices for AI development, such as using cross-disciplinary teams that include lawyers and regulatory experts. “It should also include responsible AI accountability policies, a risk management framework, a central repository of AI/ML applications and a repository of algorithms to ensure that ‘we all know how to use this technology.’”
Rose Purcell, director of global regulatory policy and innovation at Takeda, says that best practices do not have to be written “from scratch”; existing standards and agencies already include the foundation of best practices.
For medical communicators getting familiar with the uses of AI in medicine, the AI Tip Sheet for Medical Writers provides help in navigating the AI landscape.
AMWA acknowledges the contributions of Madison Hedrick, MA, for peer review in the development of this AMWA resource.