Artificial intelligence tools are increasingly being applied in the development and use of medicines. Here, we provide a regulatory perspective on such applications.
The ability of artificial intelligence (AI) tools to generate new insights and improve processes could enhance the development and regulation of medicines in areas ranging from preclinical research to pharmacovigilance. Many applications of AI tools have the potential to impact the benefit–risk ratio of medicinal products, and so intersect with the mandate of regulatory agencies such as the European Medicines Agency (EMA) to assure the quality, safety and efficacy of medicines (Supplementary Table 1). For example, an AI tool could be used to predict the relationship between different patient characteristics and a medicine’s safety and efficacy, and thereby help optimize the patient populations in clinical trials. Other uses could include AI-based classification of individual case safety reports by seriousness, or supporting medicine administration; for example, a digital insulin pump that uses AI as part of its administration, monitoring or feedback control. AI applications, therefore, have great potential for public and animal health.
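As a purely illustrative sketch (not an EMA-endorsed or deployed method), triage of individual case safety reports by seriousness can be framed as a classification task. The report structure and keyword rules below are hypothetical simplifications; the seriousness terms loosely mirror the ICH E2A criteria:

```python
from dataclasses import dataclass

# Hypothetical, highly simplified stand-in for an individual case safety report (ICSR).
@dataclass
class SafetyReport:
    narrative: str

# Illustrative seriousness criteria, loosely following ICH E2A
# (death, life-threatening, hospitalization, disability, congenital anomaly).
SERIOUS_TERMS = ("death", "life-threatening", "hospitalization",
                 "disability", "congenital anomaly")

def classify_seriousness(report: SafetyReport) -> str:
    """Toy rule-based triage: flag a report as 'serious' if its narrative
    mentions any illustrative seriousness criterion. A real system would
    use a trained model -- whose validation is the regulatory question."""
    text = report.narrative.lower()
    return "serious" if any(term in text for term in SERIOUS_TERMS) else "non-serious"

print(classify_seriousness(SafetyReport("Patient required hospitalization after dose increase.")))
```

The point of the sketch is not the (deliberately naive) keyword rule, but that any such classifier directly shapes which reports receive expedited review, and hence falls within the regulatory concerns discussed here.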
The appropriate use of AI in medicines development and use poses regulatory challenges, and addressing these challenges requires regulators to adapt and to keep abreast of this evolving field. Furthermore, the mandate of the EMA includes consultation during the regulation of certain medical devices and companion diagnostics, which can rely on AI. Consequently, within the European Medicines Regulatory Network (EMRN), building a regulatory capability for AI features in the joint Heads of Medicines Agencies (HMA)/EMA Big Data Task Force recommendations and the EMA Network Strategy to 2025.
To prepare for and adapt to AI, there have been several EMRN initiatives, for example:
• Establishment of EMA task forces on data analytics and methods, and on digital transformation
• Establishment of an EMA Analytics Centre of Excellence to build regulators’ understanding of AI and apply it to support regulatory processes
• The 2021 Joint HMA/EMA Workshop on Artificial Intelligence in Medicines Regulation
In regulating AI, we will be guided by four principles. First, we will follow science and technology, ensuring that our decisions are evidence-based. Second, we will leverage the efforts and expertise of different stakeholders including industry, academia and patients. Third, we will work to bridge between domains, including between medicines and devices regulation. Fourth, we will align with our international partners whenever possible, striving for harmonization.
These principles, the ongoing initiatives above and the public health need are the basis for this article, which sets out the EMA’s perspective on AI.
Key points in the EU perspective on AI
A clear framework will be beneficial for medicines whose benefit–risk ratio relies on digital technologies, including AI. Owing to the increasing importance of AI across sectors, new European Union (EU) legislation relating to AI is forthcoming, with potential impacts on the use of AI in medicines development. This includes the AI Act, the Data Governance Act, the European Health Data Space and the Pharmaceutical Strategy for Europe. We believe that a legal–regulatory framework beneficial for such medicines will evolve, enabling appropriate regulatory requirements to be established. This could include AI used to generate evidence for regulatory assessment, or to support the use of medicines.
This framework could include a risk categorization for digital technologies that impact a medicinal product’s benefit–risk ratio. A risk-based evaluation would ensure that regulatory requirements scale according to impact, and could be based on EU medical device classifications. The classification could be specific to the types of AI technology involved and to how the AI is used; for example, whether there is human oversight, safeguards, self-learning or device autonomy.
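In a highly simplified sketch, a risk categorization of this kind would key the level of regulatory scrutiny to how the AI is deployed. The tiers and attributes below are hypothetical illustrations, not EU medical device classes:

```python
from dataclasses import dataclass

# Hypothetical attributes of an AI deployment, echoing the factors named in
# the text: human oversight, safeguards, self-learning and device autonomy.
@dataclass
class AIUseProfile:
    human_oversight: bool
    self_learning: bool
    autonomous_administration: bool  # e.g. a device that doses without confirmation

def risk_tier(profile: AIUseProfile) -> str:
    """Toy tiering: scrutiny scales with autonomy and the absence of safeguards."""
    if profile.autonomous_administration and profile.self_learning:
        return "high"
    if profile.self_learning or not profile.human_oversight:
        return "medium"
    return "low"

# A human-supervised, static model used for decision support.
print(risk_tier(AIUseProfile(human_oversight=True,
                             self_learning=False,
                             autonomous_administration=False)))  # low
```

The thresholds here are arbitrary; the design point is only that the requirements applied to an AI tool follow from an explicit, auditable profile of its use rather than from the technology label alone.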
Regulatory access to the underlying algorithms and datasets is necessary when AI impacts the benefit–risk ratio of a medicinal product. To validate AI use that potentially impacts the benefit–risk ratio of a medicinal product or its evaluation, regulators need access to the underlying algorithm’s code and to the datasets feeding into it. The range of datasets that may need to be assessed includes, but is not limited to, clinical data, health care system data, drug use data, insurance data and, in the case of veterinary products, farm management data. Legal–regulatory provisions for such access may benefit from being streamlined or strengthened.
The post-authorization management of medicines may need updating. Updates to AI software can be rapid, potentially unpredictable and opaque, which challenges the variation framework that governs updates to authorized medicines; for example, an insulin pump that uses AI to learn a patient’s response and recalculate dosing may require regular software updates that are difficult to implement via the current variation framework. There may also be an advantage to guidelines defining major versus minor updates, in a risk-based approach, for all digital tools that impact the quality, safety or efficacy of a medicinal product and are thus linked to its benefit–risk ratio.
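The adaptive behaviour that would trigger such software updates can be caricatured as a learning feedback loop. The update rule below is a toy proportional adjustment for illustration only, not a clinical algorithm, and all parameter values are invented:

```python
def update_dose(dose: float, glucose: float, target: float, gain: float = 0.01) -> float:
    """Toy proportional controller: nudge the insulin dose toward a glucose
    target. A deployed pump's learned model would be far more complex -- and
    each retraining or parameter change is exactly the kind of update the
    variation framework would need to accommodate."""
    return max(0.0, dose + gain * (glucose - target))

# Glucose above target -> the sketched controller raises the dose slightly.
print(update_dose(2.0, glucose=180.0, target=120.0))
```

Even in this caricature, the regulatory difficulty is visible: the behaviour of the product changes whenever `gain` or the learned response model changes, without any change to the physical device.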
EU roadmap for regulation of AI for medicines
The changes set out above would enable a regulatory framework for acceptance and use of AI to be laid down in guidance for medicines. This should ensure the protection of participants in trials and patients treated in routine care, and be adaptable to innovation. A roadmap (‘Reflection paper’) towards EU guidance for AI in medicines development and use will be developed by the EMRN. This could include areas for which new or updated guidance is a priority and how the guidance should be informed through involvement of stakeholders and international partners. The resulting guidance should incorporate AI aspects related to algorithm transparency, explainability, interpretability, performance, validity (construct, content, external and so on), oversight, uncertainty, alternatives, ethical aspects and reliability.
The regulatory framework for AI that is being built will need continual updating to ensure it fits the rapidly evolving AI landscape. One promising approach to aid this could be the regulatory ‘sandbox’, as envisaged in the European Commission’s Pharmaceutical Strategy for Europe and the AI Act. This ‘sandbox’ could be a safe harbour for multi-stakeholder collaboration on innovations that challenge the regulatory system, such as the use of AI. It could enable such innovation to be used in medicines development with strict oversight yet flexible requirements, and inform any updates and/or technical annexes to the legislation, based on the experience gained. For example, the use of AI to continually improve the pharmacovigilance of an approved medicine could be enabled with the ‘sandbox’ and the regulatory framework updated based on this experience.
Owing to the novelty and potential complexity of using AI in medicines development and regulation, particularly when it is linked to the benefit–risk ratio of a medicinal product, early interaction with national regulators and the EMA is encouraged.
To chart an EU approach to AI and the above roadmap, in line with the workplan of the Big Data Steering Group, we are working with our partners in the EMRN and plan to launch a stakeholder consultation in the coming months. In delivering this roadmap, and helping shape an EU approach to AI, we will enhance the development and supervision of medicines using AI for the benefit of patients and in support of innovation.