PDA Letter Article

Navigating the AI Landscape: PDA and PQRI Host Inaugural Workshop on Artificial Intelligence in Drug Development

Peter Makowenskyj, MEng, G-CON Manufacturing and Toni Manzano, PhD, Aizon

This September, PDA and the Product Quality Research Institute (PQRI) hosted the Advancing Artificial Intelligence in the Pharmaceutical Industry Workshop 2025, held on 10–11 September at the Westin Washington, DC Downtown. Themed “Navigating the AI Landscape: Your Roadmap for Success,” the workshop offered a deep dive into how artificial intelligence (AI) is transforming the pharmaceutical sector—from drug discovery and development to biomanufacturing and real-time release strategies.

Building the Roadmap for AI in Pharma

The two-day event was designed around an interactive roadmap that guided participants from strategy development through validation and oversight. Each session blended thought-leadership presentations with case studies, group discussions, and hands-on exercises, ensuring attendees gained both conceptual insights and practical skills.

Day One began with AI: Let’s Get Down to Basics, an introductory session led by Peter Makowenskyj (G-CON) and Toni Manzano (Aizon). Participants explored AI’s fundamentals and engaged in a strategic decision-making exercise focused on AI-enabled Continued Process Verification (CPV) as a pathway to Real-Time Release (RTR). This scenario-based session highlighted both the opportunities and regulatory challenges of embedding AI into Good Manufacturing Practice (GMP) environments. The day concluded with a networking reception, offering opportunities to connect with peers and industry leaders.

Day Two built on this foundation with four dynamic sessions:

  • Discover AI!, moderated by Kir Henrici (The Henrici Group), invited participants to identify and design AI projects that tackle pressing pharmaceutical challenges.
  • AI Strategy in Biomanufacturing, led by Sara Cook (IliaCook Consulting) and Ben Stevens (GSK), examined real-world implementation challenges and regulatory considerations through case studies and group exercises.
  • Demystifying AI: Hands-On Learning for GxP Beginners, moderated by Peter Makowenskyj and led by Toni Manzano, gave attendees practical experience building a simple machine learning model tailored to regulated pharma environments. Attendees spent most of the session reviewing data, working through a day in the life of a data scientist and AI programmer to solve a complex problem.
  • AI in Pharma – Validation and Oversight, led by Kir Henrici with contributions from Stephen Ferrell (Valkit.ai) and Krishna Ghosh (Veritas Compliance), explored the distinctions between traditional system validation and AI model validation, compared the European Union and U.S. draft regulations, and closed with a discussion of the U.S. Food and Drug Administration's (FDA's) credibility framework.

Collaboration, Regulation, and Innovation

At every stage, the workshop emphasized collaborative learning, regulatory readiness, and practical application. With the pharmaceutical industry under increasing pressure to deliver safe, effective medicines faster, AI represents both a transformative opportunity and a complex challenge. The workshop provided a roadmap for navigating that balance, equipping attendees with the insights and tools needed to responsibly implement AI across the product lifecycle.

Building a Roadmap for AI Implementation in Pharma Manufacturing

As a key outcome of the PDA/PQRI Workshop on AI, participants were challenged to design practical roadmaps for introducing AI into pharmaceutical production. One team developed a comprehensive framework that integrates business, risk, regulatory, and operational perspectives, outlining a path for responsibly and effectively embedding AI into GMP environments.

The team's framework progresses through nine phases:

1. Strategy Definition & Business Case
  • Identify operational challenges and opportunities where AI can add value (e.g., predictive maintenance, real-time process monitoring, anomaly detection, batch release optimization).
  • Define the business objectives (efficiency, quality consistency, resource optimization, yield improvement).
  • Assess regulatory constraints and readiness (ICH Q8–Q12, GMP Annex 11, FDA AI guidance, EMA AI reflection paper).
  • Select and prioritize use cases based on business impact, feasibility, and risk.

2. Data Governance and Infrastructure Readiness
  • Map all data sources (process data, QC test data, eBR, LIMS, PAT, ERP systems).
  • Ensure data integrity practices (ALCOA++) and assess data quality, completeness, and historical coverage.
  • Define data governance policies: data access, security, privacy (GDPR, HIPAA if applicable), retention, and audit trails.
  • Establish cloud, on-premise, or hybrid environments for AI model development and deployment.
  • Plan for cybersecurity frameworks (aligned with NIST, ISO/IEC 27001).

3. AI Use Case Scoping and Risk Assessment
  • Define the context of use (CoU).
  • Perform a risk assessment (based on GAMP 5 Second Edition and FDA's risk-based AI model credibility framework).
  • Define critical data elements and the impact of model outputs on product quality, patient safety, and compliance.
  • Identify validation requirements for AI models (including explainability, auditability, and bias management).

4. Data Preparation and Model Development
  • Aggregate and clean historical process and QC data.
  • Conduct exploratory data analysis (EDA) to understand variability and patterns.
  • Split data for training, validation, and testing; consider synthetic or augmented data where gaps exist.
  • Select suitable AI/ML techniques (e.g., anomaly detection, supervised classification, multivariate control).
  • Document algorithm development, hyperparameter tuning, and rationale.
  • Ensure traceability of all AI development activities (data lineage, model versioning, training logs).

5. Model Validation and Qualification
  • Apply a GAMP 5-aligned AI validation framework: URS, FS, and DS documentation; Installation Qualification (IQ), Operational Qualification (OQ), and Performance Qualification (PQ).
  • Define model acceptance criteria: accuracy, robustness, reproducibility, explainability.
  • Test the model on edge cases and unseen operational scenarios.
  • Document risk mitigations for data drift, model bias, and failure modes.

6. Regulatory Engagement
  • For high-impact models (e.g., real-time release, batch approval decision support), plan early engagement with regulators (FDA Emerging Technology Program, EMA's AI reflection paper).
  • Prepare validation and lifecycle management documentation for submission or inspection readiness.

7. Deployment, Integration, and Change Control
  • Integrate AI models into manufacturing control systems, PAT platforms, or QC workflows.
  • Conduct user training and procedural updates.
  • Ensure AI-generated outputs are integrated into eBR/eDMS/eLIMS systems as GxP records per 21 CFR Part 11 and Annex 11.
  • Implement change control processes for AI model deployment, updates, and retraining.
  • Align with cybersecurity and data protection regulations.

8. Monitoring, Maintenance, and Lifecycle Management
  • Define and implement real-time model monitoring dashboards.
  • Set up drift detection and model performance KPIs.
  • Plan for requalification or retraining triggers (e.g., process changes, raw material source shifts).
  • Maintain comprehensive model lifecycle documentation (version control, updates, change history).
  • Regularly review data quality and regulatory compliance.

9. Continuous Improvement and Scalability
  • Conduct post-implementation review of AI models' operational performance and business impact.
  • Capture learnings and refine risk assessment and validation frameworks.
  • Identify additional AI opportunities and plan scalable multi-use case roadmaps (e.g., AI-enabled CPV, predictive environmental monitoring, yield optimization).
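Among the techniques the roadmap names are anomaly detection and multivariate control. As a minimal, hypothetical sketch of the simplest case, not a method prescribed by the workshop, the control-limit idea can be shown with a 3-sigma check on a single quality attribute (the function names and assay values below are invented for illustration):

```python
import statistics

def control_limits(history, k=3.0):
    """Derive mean +/- k*sigma limits from validated historical data."""
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history)
    return mu - k * sigma, mu + k * sigma

def flag_anomalies(observations, limits):
    """Return (index, value) pairs that fall outside the validated limits."""
    lo, hi = limits
    return [(i, x) for i, x in enumerate(observations) if not (lo <= x <= hi)]

# Hypothetical assay values from a historical, in-control process
history = [99.8, 100.1, 100.0, 99.9, 100.2, 100.1, 99.7, 100.0, 100.3, 99.9]
limits = control_limits(history)
print(flag_anomalies([100.0, 100.1, 103.5, 99.8], limits))  # → [(2, 103.5)]
```

In a GMP setting the limits themselves would be established and locked down during qualification, with any excursion routed through the existing deviation process rather than handled ad hoc.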

Business Case and Strategic Value

The team began by framing the business case for AI adoption: delivering better, faster, cheaper outcomes in pharmaceutical operations. The chosen use case, a quality maturity index (QMI), exemplified high strategic value by enabling real-time monitoring of the quality management system. Such a system can support real-time release testing (RTRT), strengthen continued process verification (CPV), and automate deviation management, training, and investigations. While the potential rewards are significant, ranging from earlier deviation detection to improved efficiency, the group emphasized that the efficiency gains must always be balanced with assurance of compliance and product quality.
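The workshop output does not specify how a QMI would actually be computed. As a purely illustrative sketch, assuming the index is a weighted average of normalized quality KPIs (the KPI names and weights below are invented, not part of the team's framework):

```python
# Hypothetical QMI: assumes a weighted average of KPIs normalized to [0, 1].
# These KPI names and weights are illustrative only.
KPI_WEIGHTS = {
    "deviation_closure_rate": 0.4,  # fraction of deviations closed on time
    "training_compliance": 0.3,     # fraction of staff current on training
    "capa_effectiveness": 0.3,      # fraction of CAPAs verified effective
}

def quality_maturity_index(kpis: dict) -> float:
    """Weighted average of quality KPIs, each already normalized to [0, 1]."""
    if set(kpis) != set(KPI_WEIGHTS):
        raise ValueError("unexpected KPI set")
    return sum(KPI_WEIGHTS[k] * v for k, v in kpis.items())

qmi = quality_maturity_index({
    "deviation_closure_rate": 0.9,
    "training_compliance": 1.0,
    "capa_effectiveness": 0.8,
})
print(round(qmi, 3))  # → 0.9
```

Whatever the real formula, the point the team made holds: the index is only as trustworthy as the data feeding it, which is why data integrity sits alongside efficiency in the business case.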

Risk Evaluation and Lifecycle Management

AI use cases inherently carry new forms of risk. The group mapped these across the lifecycle of model deployment:

  • Model input and output validation: data integrity, interpretability, and traceability were considered non-negotiable; poor inputs or opaque outputs could undermine trust and compliance.
  • Continuous optimization risks: the roadmap highlighted the dangers of data drift and model drift, which may cause outputs to deviate from validated states or regulatory expectations. Continuous monitoring, along with predefined boundaries and alerts, was proposed to mitigate this.
  • Lifecycle considerations: AI should be treated as a qualified system component, with ongoing validation of its relevance and performance. Importantly, while the model may provide insights, decision-making remains governed by established QMS processes, ensuring AI augments rather than replaces human oversight.
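The continuous-monitoring idea, predefined boundaries with alerts when live data departs from the validated state, can be sketched as follows. The class, window size, and boundary rule are illustrative assumptions, not a method prescribed by the workshop:

```python
from collections import deque
import statistics

class DriftMonitor:
    """Illustrative drift monitor: compares a rolling window of live process
    values against a validated baseline and alerts when the window mean
    leaves a predefined boundary."""

    def __init__(self, baseline, window=5, k=3.0):
        self.mu = statistics.fmean(baseline)
        # Standard error of the window mean sets the alert boundary.
        self.bound = k * statistics.stdev(baseline) / window ** 0.5
        self.window = deque(maxlen=window)

    def observe(self, value) -> bool:
        """Record a value; return True when drift is suspected."""
        self.window.append(value)
        if len(self.window) < self.window.maxlen:
            return False  # not enough data to judge yet
        return abs(statistics.fmean(self.window) - self.mu) > self.bound

# Hypothetical validated baseline and a slowly drifting live signal
baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8, 10.1, 10.0, 10.2, 9.9]
mon = DriftMonitor(baseline)
alerts = [mon.observe(v) for v in [10.0, 10.1, 10.4, 10.5, 10.6, 10.7, 10.8]]
# alerts → [False, False, False, False, True, True, True]
```

In line with the group's point above, an alert like this would trigger review under the QMS, not an automatic model retrain: the drift signal informs the decision, but humans and established change control own it.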

Regulatory Alignment

From a regulatory standpoint, the roadmap underscored the need for early and transparent engagement with health authorities. Teams recommended maintaining clear documentation throughout model development, from intended use and algorithm justification to change control records and audit trails. Regulatory agencies are increasingly open to adaptive frameworks, but traceability of decisions and compliance with validated design spaces remain essential safeguards.

The roadmap also explored how regulatory acceptance will likely evolve. While AI may reduce reliance on traditional testing in the future, the group acknowledged that traditional assays and validation steps will remain necessary during early adoption to provide a safety net. Over time, evidence generated through well-controlled AI deployments could support broader regulatory confidence in AI-driven RTR strategies.

Deployment, Integration, and Scalability

Operationally, integration with site quality systems and adherence to change control protocols were identified as critical. While initial infrastructure maturity was considered low, requiring significant internal development to guarantee data protection and accountability, the long-term benefits of scalable AI models were seen as substantial. Early applications, even if limited in operational control, can generate trust, build organizational skills, and lay the foundation for future automation and control-level AI systems.

A Balanced, Phased Approach

The roadmap produced by the workshop group provides a phased, risk-aware pathway for AI adoption in pharmaceutical manufacturing. It balances strategic ambition with regulatory caution, highlighting the need for:

  • Robust business cases linked to quality outcomes,
  • Rigorous risk assessment and lifecycle management,
  • Transparent engagement with regulators, and
  • Scalable integration into existing GMP frameworks.

By framing AI not as a disruptive replacement but as an augmentative tool within the established quality ecosystem, this roadmap positions pharmaceutical manufacturers to advance toward real-time release and digitalized operations, while maintaining the industry’s uncompromising standards for patient safety and product efficacy.