Understanding the FDA’s Draft Guidance on Artificial Intelligence to Support Regulatory Decision-Making

Authors: Zhang S

The pharmaceutical industry is on the brink of a transformative shift, with artificial intelligence (AI) increasingly being leveraged across the drug product lifecycle. Recognizing this, the U.S. Food and Drug Administration (FDA) released a draft guidance for industry and other interested parties entitled "Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products" (1), aimed at helping sponsors and stakeholders navigate the use of AI in submissions to regulatory bodies. This comprehensive document lays out a structured framework to ensure AI-driven tools are credible, reliable, and effective when used to support regulatory decisions concerning drug safety, efficacy, and quality.

If you’re a scientist or industry professional exploring AI applications in pharmaceutical development, this draft guidance offers relevant insights. Here, the Regulatory Strategies Center of Excellence (RS COE) at Simulations Plus distills its key recommendations and discusses some of their implications.

 

A Seven-Step Framework for AI Credibility 

At the heart of the FDA’s guidance is a risk-based credibility assessment framework designed to establish and evaluate the credibility of AI models. This framework is not new; it has been applied to many other types of models, especially highly complex models such as quantitative systems pharmacology (QSP) models. More details on the risk-based assessment framework can be found in the recently published ICH M15 Guideline on general principles for model-informed drug development (MIDD) (2), previous FDA publications (3, 4), and published NDA (New Drug Application) / BLA (Biologics License Application) reviews (5).

In the sections that follow, each step of the risk-based credibility assessment framework is detailed, highlighting key takeaways and offering our interpretation of the guidance. This analysis is informed by a combination of collective industry insights and regulatory experience.

 

1. Define the Question of Interest

The first step is to articulate the specific question the AI model aims to address. The question of interest could be any potential drug development question and should not be constrained by the type of model. The guidance provides two hypothetical examples of questions of interest; you may find that those questions could apply equally well to a semi-mechanistic pharmacokinetic / pharmacodynamic model. Real examples of questions of interest can be found in previous publications (3) and NDA / BLA reviews (5).

 

2. Define the Context of Use (COU) for the AI model

The COU specifies the AI model’s purpose and boundaries. It describes what the model is intended to do, how its outputs will be used, and whether other evidence will complement its predictions. For instance, an AI model used in manufacturing to assess vial fill levels might supplement, but not replace, traditional quality control methods. These complementary sources of evidence should be stated when describing the AI model’s COU in this step, and they are relevant when determining the model influence in step 3.

 

3. Assess the AI Model Risk

Model risk is assessed based on two factors: 

  • Model influence: The extent to which the AI model’s output informs decisions. 
  • Decision consequence: The potential impact of incorrect decisions based on the model’s output. 

For example, in clinical settings, a high-risk model might directly determine patient monitoring protocols for a serious adverse event, making its accuracy and reliability critical. 

Although the scales for model influence and decision consequence are not clearly defined in the current guidance, a three-level (low, medium, and high) score is defined in the ICH M15 MIDD guideline (2).

Performing the model risk assessment is a milestone in the risk-based credibility assessment framework, as its outcome directly impacts the model performance acceptance criteria, which will be laid out in the credibility assessment plan.
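To make the two-factor assessment concrete, here is a minimal sketch (our own illustration, not part of the guidance) that combines a three-level influence score and a three-level consequence score into an overall model risk tier. The conservative "take the higher of the two factors" aggregation rule is an assumption for illustration; the guidance and ICH M15 leave the exact combination to sponsor judgment.

```python
# Hypothetical risk matrix: combine model influence and decision
# consequence (each scored low/medium/high, as in ICH M15) into an
# overall model risk tier. The max() aggregation is an illustrative
# assumption, not a rule from the FDA draft guidance.
LEVELS = {"low": 0, "medium": 1, "high": 2}
NAMES = {0: "low", 1: "medium", 2: "high"}

def model_risk(influence: str, consequence: str) -> str:
    """Return overall model risk, conservatively taking the higher factor."""
    score = max(LEVELS[influence], LEVELS[consequence])
    return NAMES[score]

# A model whose output strongly informs a decision with serious
# consequences lands in the high-risk tier.
print(model_risk("medium", "high"))  # high
```

A higher tier would then translate into stricter acceptance criteria and more extensive credibility assessment activities in the plan developed in step 4.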

  

4. Develop a Credibility Assessment Plan

This step involves crafting a detailed plan to evaluate the AI model’s credibility. Credibility assessment activities should be based on the question of interest, the COU, and the model risk. Key elements of the plan include: 

  • Model and Model Development Process Description: Define the model’s inputs, outputs, architecture, and features. 
  • Data Description: Ensure datasets used for training and testing are reliable, relevant, and representative of the target population or process. 
  • Evaluation Process and Metrics: Employ appropriate performance measures such as sensitivity, specificity, and confidence intervals. 
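As a brief sketch of the evaluation metrics named above (our own example, with hypothetical confusion-matrix counts), sensitivity and specificity can be computed directly from test-set counts, and a Wilson score interval is one common way to attach a confidence interval to such a proportion:

```python
import math

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of actual negatives correctly identified."""
    return tn / (tn + fp)

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts from a held-out test set
tp, fn, tn, fp = 90, 10, 80, 20
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.80
lo, hi = wilson_ci(tp, tp + fn)
print(f"95% CI for sensitivity: ({lo:.2f}, {hi:.2f})")
```

Which metrics are appropriate, and what thresholds they must meet, should follow from the model risk determined in step 3.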

This section of the guidance provides a detailed outline of a potential modeling and simulation plan. We recommend that sponsors work with the regulatory agency to reach agreement on the model risk assessment and the credibility assessment plan early, prior to execution. Sponsors are also encouraged to discuss execution and reporting timelines with the regulatory agency at this stage.

 

5. Execute the Plan

Sponsors are encouraged to engage with the FDA to ensure the credibility plan aligns with regulatory expectations. Execution should follow the predefined steps while addressing any unforeseen challenges, with any deviations from the plan justified and documented.

Of note, the FDA has not provided guidance on how the Agency will be involved in monitoring or inspecting the execution, especially for high-risk cases.

 

6. Document Results and Deviations

All results from the credibility assessment should be documented in a “credibility assessment report.” This report should include findings, justifications for any deviations, and insights into model performance.  

In general, in addition to the credibility assessment report, all associated modeling files and scripts used to generate the outputs should also be submitted to the FDA so that reviewers can replicate the key simulations. For newer modeling and simulation tools, unforeseen circumstances might delay this submission. The guidance indicates that “submission of the credibility assessment report should be discussed with the FDA.” We strongly recommend that sponsors closely engage with the Agency on these submission activities.

 

7. Determine Model Adequacy

The final step evaluates whether the AI model’s credibility is sufficient for its intended COU. If inadequacies are identified, sponsors may need to refine the model, gather additional data, or alter its application. 

In this section, the FDA offers a few options for sponsors when model credibility is deemed insufficiently established for the model risk. We encourage sponsors to actively explore these options during the execution phase rather than waiting until this final step.

 

Special Considerations: Life Cycle Maintenance

AI models are not static. They evolve over time as new data and insights become available. The FDA emphasizes the importance of life cycle maintenance to ensure models remain fit for purpose throughout their deployment. This involves: 

  • Monitoring model performance over time 
  • Identifying and addressing data drift or performance degradation 
  • Implementing changes as needed and assessing their impact 

For instance, in pharmaceutical manufacturing, changes to production processes or data inputs might necessitate retraining or reevaluating the AI model to maintain its accuracy and reliability. 
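One widely used way to operationalize the drift monitoring described above is the Population Stability Index (PSI), which compares the distribution of a model input at deployment against its training-time baseline. The sketch below is our own illustration with hypothetical bin fractions and thresholds; the guidance does not prescribe any particular drift metric.

```python
import math

def psi(baseline_fracs, current_fracs, eps=1e-6):
    """Population Stability Index over pre-binned fractions.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (thresholds are conventions, not
    regulatory requirements)."""
    total = 0.0
    for b, c in zip(baseline_fracs, current_fracs):
        b, c = max(b, eps), max(c, eps)  # guard against empty bins
        total += (c - b) * math.log(c / b)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin fractions
current = [0.10, 0.20, 0.30, 0.40]   # deployment-time bin fractions
print(f"PSI = {psi(baseline, current):.3f}")
```

A PSI crossing a predefined threshold could then trigger the retraining or reevaluation activities described in the life cycle management plan.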

A life cycle management plan for the AI model could be included in the marketing application to proactively obtain feedback from the Agency. 

 

Data Integrity and Transparency 

One of the most critical aspects of AI model credibility is the quality and management of data. The FDA highlights the need for: 

  • Relevance and Representativeness: Datasets should include key variables and represent the target population or manufacturing process. 
  • Reliability: Data should be accurate, complete, and traceable. 
  • Bias Mitigation: Sponsors must identify potential sources of algorithmic bias and implement strategies to address them. 

Additionally, transparency in model development and evaluation is essential. This includes documenting how data were collected, processed, and used, as well as providing a clear rationale for model design choices. 
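As a small illustration of the reliability and representativeness checks listed above (our own sketch, with hypothetical field names and records), two basic dataset audits are completeness of key variables and the gap between dataset subgroup fractions and target-population fractions:

```python
# Hypothetical sketch of basic training-data checks: completeness
# (no missing values in key variables) and representativeness
# (subgroup fractions compared against target-population fractions).
# Field names and records are illustrative assumptions.
def completeness(records, key_fields):
    """Fraction of records with all key fields present."""
    ok = sum(all(r.get(f) is not None for f in key_fields) for r in records)
    return ok / len(records)

def subgroup_gap(records, field, target_fracs):
    """Largest absolute gap between dataset and target subgroup fractions."""
    counts = {}
    for r in records:
        counts[r[field]] = counts.get(r[field], 0) + 1
    n = len(records)
    return max(abs(counts.get(g, 0) / n - frac) for g, frac in target_fracs.items())

data = [{"age": 64, "sex": "F"}, {"age": 51, "sex": "M"},
        {"age": None, "sex": "F"}, {"age": 70, "sex": "F"}]
print(completeness(data, ["age", "sex"]))               # 0.75
print(subgroup_gap(data, "sex", {"F": 0.5, "M": 0.5}))  # 0.25
```

Documenting the results of such checks, along with how the data were collected and processed, supports the transparency expectations noted above.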

 

Engaging with the FDA Early – and Engaging with Simulations Plus (SLP) Early!

Early and proactive engagement with the FDA is strongly encouraged. Sponsors can leverage formal meetings and specialized programs to discuss AI models and their regulatory implications. Examples of engagement options include: 

  • Program meetings: such as pre-IND (Investigational New Drug), IND, EOP (end of phase) milestone meetings, pre-NDA, pre-BLA meetings. 
  • Other engagement options: such as the MIDD paired meeting program, the real-world evidence (RWE) program, etc. 

The program meetings are dedicated to a specific program and can cover all aspects of program development, including but not limited to preclinical, clinical, clinical pharmacology, CMC (chemistry, manufacturing, and controls), biopharmaceutics, and regulatory questions. Each meeting is generally designed to be an hour long; therefore, the number of questions for the FDA included in each meeting package should be limited.

Other engagement options, such as the MIDD meeting, might provide opportunities for a more in-depth discussion about the AI model. The sponsor should evaluate program timelines, specific questions to the FDA, and aspects other than the AI model to select an appropriate mechanism for interaction with the FDA on AI models. 

The FDA has stressed in several places in the draft guidance that it would like sponsors to meet with the Agency early in the AI model development process. Therefore, for sponsors who seek external collaborations and assistance to develop an AI model, it is also critical to engage with the RS COE at Simulations Plus early in the process. Partnering with Simulations Plus offers the advantage of working with an organization that has a well-established reputation and a strong track record of collaboration with regulatory agencies. With 20-25 years of experience applying machine learning/AI across key areas such as ADME property prediction, AI-driven drug design, high-throughput PBPK, and QSP, Simulations Plus brings deep expertise to the table.

Our extensive experience positions us as a valuable partner in facilitating and supporting regulatory interactions around AI model implementation. Our proficiency in navigating regulatory frameworks ensures that AI tools are integrated into drug development processes in compliance with the guidance. To achieve this, it is critical that the regulatory agencies are provided with full transparency with respect to the elements outlined above, to effectively support the selected strategy.

 

Practical Examples of AI Use 

The FDA’s guidance provides several illustrative examples of AI applications in pharmaceutical development: 

  • Clinical Development: An AI model predicts patient risk for drug-related adverse events, enabling tailored monitoring strategies. 
  • Manufacturing: An AI-based visual analysis system identifies deviations in vial fill volumes, complementing traditional quality control methods. 

These examples highlight the diverse potential of AI to enhance decision-making across the drug lifecycle, from clinical trials to post-marketing surveillance.  

 

Implications for Industry Professionals

The FDA’s draft guidance marks a significant step toward integrating AI into pharmaceutical development. For scientists and industry stakeholders, this guidance underscores the importance of: 

  • Establishing robust processes to assess and document AI model credibility. 
  • Prioritizing data integrity and transparency (such as providing detailed AI model development methods and processes). 
  • Engaging with regulators early and often to navigate the evolving AI landscape via various channels (such as program meetings, or special venues as mentioned above). 

By adhering to these principles, the pharmaceutical industry can harness the full potential of AI while ensuring safety, effectiveness, and quality for drugs. 

 

Conclusion

Artificial intelligence holds significant promise for transforming pharmaceutical development. However, its effective implementation requires a careful balance of innovation and regulatory rigor. The FDA’s draft guidance provides a roadmap for navigating this complex terrain, emphasizing a risk-based approach to establishing AI credibility and maintaining performance over time. 

For industry professionals, this guidance is not just a regulatory requirement but an opportunity to lead the way in developing safe, effective, and innovative AI-driven solutions. Specifically, with the right approach, pharmaceutical companies can not only drive innovation but also build trust and transparency with regulators, ultimately advancing the adoption of AI to improve health outcomes.  

If your organization is interested in incorporating AI into its drug development programs, The RS COE at Simulations Plus is equipped with the expertise to provide guidance and support. Learn more about how we can help. 

 

References: 

  1. FDA. Guidance for Industry and Other Interested Parties: Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. Available from: https://www.fda.gov/media/184830/download.
  2. ICH. ICH M15 Guideline on general principles for model-informed drug development. Available from: https://www.fda.gov/media/184747/download.
  3. Kuemmel C, Yang Y, Zhang X, Florian J, Zhu H, Tegenge M, Huang SM, Wang Y, Morrison T, Zineh I. Consideration of a Credibility Assessment Framework in Model-Informed Drug Development: Potential Application to Physiologically-Based Pharmacokinetic Modeling and Simulation. CPT Pharmacometrics Syst Pharmacol. 2020;9(1):21-28.
  4. Bai JPF, Liu G, Zhao M, Wang J, Xiong Y, Truong T, Earp JC, Yang Y, Liu J, Zhu H, Burckart GJ. Landscape of regulatory quantitative systems pharmacology submissions to the U.S. Food and Drug Administration: An update report. CPT Pharmacometrics Syst Pharmacol. 2024;13(12):2102-2110.
  5. FDA. NDA/BLA Multi-disciplinary Review and Evaluation for NDA 216490 Yorvipath (Palopegteriparatide). Available from: https://www.accessdata.fda.gov/drugsatfda_docs/nda/2024/216490Orig1s000MultidisciplineR.pdf.
  6. FDA. Artificial Intelligence for Drug Development. Available from: https://www.fda.gov/about-fda/center-drug-evaluation-and-research-cder/artificial-intelligence-drug-development.
  7. FDA. Artificial Intelligence and Machine Learning in Software as a Medical Device. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device.