
ISPE AI Guide: How to Validate Artificial Intelligence in a Pharma Environment

Artificial intelligence is playing an increasingly important role in the pharmaceutical industry and life sciences, and use cases are multiplying: document analysis, information extraction, decision support, signal detection and data-processing automation. But in a GxP environment, one question immediately arises: how can these technologies be integrated while remaining compliant, controlled and auditable?


The ISPE guide dedicated to artificial intelligence provides a structured framework to address this question. It helps position AI not only as a technological topic, but also as a matter of quality, governance, risk management and validation.


This article is based on the first introductory webinar on the ISPE AI Guide, dedicated to the fundamentals of validating artificial intelligence in a Pharma environment.



Why the ISPE AI Guide Matters

AI generates a great deal of interest, but also a great deal of confusion. Between technological promises, business expectations and regulatory constraints, companies in the Pharma sector need a clear framework to move forward pragmatically.

The ISPE guide provides a framework for analysis adapted to regulated environments. It is not only about algorithms or technical performance, but about placing AI within a logic of quality, governance, risk management and validation.

The objective is therefore not to deploy AI simply “to appear innovative”, but to implement systems that are genuinely reliable, controlled and fit for their intended use.

A Regulatory Framework Still in Evolution

To date, there is not yet a Pharma regulation fully dedicated to AI. However, existing frameworks continue to apply, including EU GMP Annex 11 and 21 CFR Part 11.

The absence of a specific text does not mean the absence of requirements. Manufacturers must already demonstrate that their systems remain controlled, that data is relevant, that performance is monitored, and that risks are assessed.

Several FDA and EMA publications and initiatives also help clarify regulatory expectations. The joint FDA-EMA publication released in January 2026 around ten key good practice principles for AI in drug development confirms this direction.

The message from the authorities is clear: AI must be governed seriously, especially when it is involved in critical processes.

What AI Changes in the Validation Approach

One of the guide’s key contributions is to show that historical GAMP principles remain valid, but must be expanded to account for the specific characteristics of AI.

A system integrating AI is not simply a conventional software application with an interface and deterministic rules. It may include a model, datasets, pre-processing and post-processing mechanisms, learning logic, and behaviours that may be more or less dynamic over time.

This requires broadening the validation perspective across several dimensions: data quality, model governance, traceability, explainability, operational monitoring and change management.

Data Becomes a Central Pillar

Data quality lies at the heart of AI system quality.

Data must be fit for its intended use, which implies several essential characteristics: it must be reliable, relevant, representative and sufficient.

This represents an important shift for many organisations. In a traditional system, much of the expected behaviour is carried by the application logic. In an AI-based system, performance also directly depends on the quality of the data used to train, test, validate or feed the model.

In practice, an AI project in a regulated environment cannot be approached without a genuine data management strategy.
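As an illustration of what "fit for intended use" checks might look like in practice, here is a minimal sketch covering sufficiency, reliability (completeness) and representativeness. The specific checks, field names and thresholds are assumptions for illustration, not requirements from the ISPE guide:

```python
# Hypothetical sketch: basic fitness-for-use checks on a training dataset.
# Thresholds and field names are illustrative assumptions only.

def check_dataset(records, required_fields, min_records, class_field, min_class_share):
    """Return a list of findings; an empty list means all checks passed."""
    findings = []

    # Sufficiency: enough records to train and evaluate on.
    if len(records) < min_records:
        findings.append(f"insufficient data: {len(records)} records < {min_records} required")
    if not records:
        return findings

    # Reliability: no record may be missing a required field.
    incomplete = [r for r in records if any(r.get(f) is None for f in required_fields)]
    if incomplete:
        findings.append(f"{len(incomplete)} records with missing required fields")

    # Representativeness: every observed class must reach a minimum share.
    counts = {}
    for r in records:
        counts[r.get(class_field)] = counts.get(r.get(class_field), 0) + 1
    for label, n in counts.items():
        if n / len(records) < min_class_share:
            findings.append(f"class '{label}' under-represented: {n}/{len(records)}")

    return findings
```

Checks like these would typically run before any training or retraining, with the findings reviewed and documented as part of the data management strategy.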

Validating an AI Model Is Not the Same as Validating Conventional Software

The lifecycle of an AI model is not identical to that of conventional software. It often relies on iterative experimentation, with successive loops of model selection, data engineering, tuning, evaluation and improvement.

This logic is essential to understand. In traditional software development, the objective is usually to build and verify features. In model development, the objective is to achieve an acceptable level of performance against defined indicators, based on given data and within a given context.

Validation must therefore integrate this reality. It is no longer enough to verify that a function works; it is also necessary to demonstrate that the model is relevant, that its performance is understood, that its limitations are known, and that its behaviour remains acceptable within the intended context of use.
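The shift from verifying features to demonstrating performance can be made concrete with a small release-gating sketch: the model is accepted only if every predefined indicator meets its acceptance criterion. The metric names and thresholds below are illustrative assumptions, not values from the guide:

```python
# Hypothetical sketch: gate a model release on predefined performance
# indicators. Metric names and thresholds are illustrative assumptions.

ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,        # minimum acceptable overall accuracy
    "recall_critical": 0.95  # e.g. recall on a safety-critical class
}

def evaluate_release(measured_metrics, criteria=ACCEPTANCE_CRITERIA):
    """Compare measured metrics to acceptance criteria.

    Returns (accepted, report), where the report records each indicator's
    measured value, required threshold and pass/fail status for the record.
    """
    report = {}
    for name, threshold in criteria.items():
        value = measured_metrics.get(name)
        passed = value is not None and value >= threshold
        report[name] = {"measured": value, "required": threshold, "pass": passed}
    return all(entry["pass"] for entry in report.values()), report
```

Keeping the full report, not just the yes/no outcome, is what makes the decision auditable: it documents not only that the model passed, but against which criteria and by what margin.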

Static or Dynamic: An Essential Distinction

The distinction between static and dynamic systems has a direct impact on the validation strategy and on the ability to maintain control over time.

A static system relies on a model that is fixed at the time it is released into production. Any evolution then goes through a formal change process.

By contrast, a dynamic system can evolve over time. This requires reinforced monitoring, predefined metrics, tolerance thresholds and a much more robust governance framework.

At present, static models remain the most commonly deployed, particularly because they are easier to control from a data, infrastructure and validation effort perspective.
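A minimal sketch of the kind of reinforced monitoring a dynamic system would require, assuming a single performance metric tracked against a tolerance band around its validated baseline (all values are illustrative assumptions):

```python
# Hypothetical sketch: flag a deviation when the rolling mean of a monitored
# metric drops below baseline - tolerance. Baseline, tolerance and window
# sizes are illustrative assumptions to be defined during validation.

def monitor(metric_history, baseline, tolerance, window):
    """Evaluate the last `window` observations against the tolerance band."""
    if len(metric_history) < window:
        return None  # not enough observations yet to conclude
    recent = metric_history[-window:]
    rolling_mean = sum(recent) / window
    if rolling_mean < baseline - tolerance:
        return f"DEVIATION: rolling mean {rolling_mean:.3f} below {baseline - tolerance:.3f}"
    return "within tolerance"
```

In a real deployment, a deviation signal like this would feed the incident and change management processes rather than trigger an automatic model update.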

Data and Model Governance: A Prerequisite

AI cannot be addressed without governance.

There are many dimensions to control: roles and responsibilities, standards, data and model lifecycle management, traceability, access security, incident management, auditability and monitoring metrics.

A compliant AI system is therefore not only a high-performing system. It is a system for which the organisation knows:

  • who is responsible for it;

  • which data it uses;

  • how it has been configured;

  • how it is monitored;

  • how changes are managed;

  • how deviations are detected and handled.

This ability to document, explain and manage the system over time is what truly determines its credibility in a regulated environment.
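As one illustration, the questions above could be captured in a minimal governance record maintained per deployed model. The field names, schema and sample values are assumptions for illustration, not a mandated format:

```python
# Hypothetical sketch: a per-model governance record mirroring the questions
# an organisation must be able to answer. Schema and values are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelGovernanceRecord:
    model_id: str
    owner: str                  # who is responsible for it
    dataset_versions: tuple     # which data it uses
    config_hash: str            # how it has been configured
    monitoring_metrics: tuple   # how it is monitored
    change_control_ref: str     # how changes are managed
    deviation_procedure: str    # how deviations are detected and handled
```

Whether held in a dataclass, a database or a quality management system, the point is the same: each answer is explicit, versioned and retrievable at audit time.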

AI Literacy and Knowledge Management: Strategic Challenges

AI validation does not rely solely on technology. It also requires organisations to develop their skills and improve knowledge management.

This point is often underestimated. Many projects fail or become fragile not because the technology is poor, but because the organisation does not have the level of understanding required to ask the right questions, frame use cases properly and make the right decisions.

To be controlled, AI must be supported by a strong dialogue between business, quality, validation, data science, IT and compliance teams. This cross-functional approach becomes a condition for success.

LLMs, RAG and AI Agents: New Use Cases, New Risks

Foundation models, LLMs and RAG approaches open the door to more complex systems, including forms of agentic AI capable of supporting research, analysis or knowledge management processes.

But these technologies also introduce specific challenges: difficulties integrating them into a controlled software architecture, variable response quality, lack of reliability, bias risks, difficulty understanding model behaviour, and additional complexity when fine-tuning is implemented.

For regulated companies, innovation must therefore go hand in hand with a clear risk analysis and a validation strategy adapted to the true nature of the system.
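To illustrate the RAG pattern mentioned above, here is a deliberately simplified retrieval-and-prompt-assembly sketch in plain Python. Production systems use embedding models and vector stores; the term-overlap scoring here is only a stand-in for the retrieval step:

```python
# Hypothetical sketch of a RAG pipeline's structure: retrieve relevant
# documents, then build a prompt that grounds the model in that context.
# The scoring is a toy stand-in for embedding-based retrieval.

def retrieve(question, documents, top_k=2):
    """Rank documents by the number of terms they share with the question."""
    q_terms = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, documents):
    """Assemble a prompt instructing the model to answer from context only."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )
```

Even in this toy form, the structure shows where the validation questions land: the quality of the retrieved context, the grounding instruction, and the behaviour of the model that ultimately consumes the prompt.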

Key Takeaways

AI validation in Pharma is not a theoretical topic: it is already an operational challenge.

Companies cannot wait for a perfect framework before taking action. They must start structuring their approach now around several key principles:

  • a clear understanding of the intended use;

  • rigorous attention to data;

  • strong governance;

  • a clear distinction between static and dynamic models;

  • adapted risk management;

  • upskilling of teams.

The ISPE AI guide provides a valuable foundation for building this approach in a coherent and pragmatic way.

For Pharma and MedTech stakeholders, the message is clear: artificial intelligence should not be treated as a simple innovation topic, but as a true matter of quality, compliance and control of computerized systems.

At ADN, this is precisely the balance we support every day: helping organisations connect innovation with regulatory expectations and control of computerized systems.


To go further

This topic was also covered in a dedicated webinar on the ISPE AI guide and the validation of artificial intelligence in a Pharma environment.

👉 Watch the replay of the first webinar dedicated to the ISPE Guide on AI

In the next articles, we will explore three key dimensions of the ISPE AI guide in more detail: the lifecycle approach for systems integrating AI, risk and supplier management, and the challenges of governance and AI culture within Pharma and MedTech organisations.
