User Requirements for AI in GxP: Designing, Selecting, and Validating AI/ML Software with Confidence

February 23, 2026

Artificial Intelligence and Machine Learning are rapidly moving from "innovation pilots" to everyday tools in regulated life sciences organizations, supporting everything from deviation trending and complaint analysis to document drafting, image recognition, and clinical insights.

But as AI becomes more embedded in workflows, the compliance question changes from "Can we use it?" to "How do we prove it's fit for intended use, and stays that way?"

Across FDA, MHRA, and EMA expectations, the answer begins with a familiar foundation: clear User Requirements (URS) and a validation approach appropriate for the solution's risk, intended use, and lifecycle behavior.

AI doesn't replace traditional quality principles; it tests whether we've fully applied them.

Why User Requirements Matter More with AI/ML

User Requirements are often treated as a "check-the-box" precursor to vendor selection or validation execution. For AI/ML-enabled software, the URS becomes even more important because the technology introduces new challenges:

  • Outputs may be probabilistic rather than deterministic
  • Performance can vary based on data inputs
  • Models may change over time (or may appear to)
  • Human interaction becomes part of the control strategy
  • System behavior can be difficult to explain without transparency

Regulators generally expect that organizations understand and control these characteristics, especially when the tool supports or influences:

  • GxP decision-making
  • product quality
  • patient safety
  • data integrity
  • compliance outcomes

If the URS and intended use are vague, AI/ML validation will almost always be insufficient, and operational risks multiply after go-live.

Where FDA, MHRA, and EMA Expectations Converge

While the FDA, MHRA, and EMA may publish guidance in different formats and with different emphasis, their expectations converge strongly around these principles:

1. Intended Use Drives Everything

The single most important compliance question is:

What is the AI system being used for, by whom, and in what process?

AI used for drafting an SOP summary has a different risk profile than AI used to triage deviations, classify adverse events, or support release decisions.

2. Risk-Based Validation Still Applies, But Must Be Expanded

Traditional CSV (Computer System Validation) approaches still apply (risk assessment, URS, IQ/OQ/PQ, traceability), but AI adds validation dimensions like:

  • model performance requirements
  • robustness
  • drift detection
  • bias considerations
  • explainability where relevant
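To make the first of these dimensions concrete, model performance requirements are typically written as objective acceptance thresholds that can be checked against test results. A minimal Python sketch, assuming illustrative metric names and threshold values (these are examples, not regulatory figures):

```python
# Sketch: checking model performance KPIs against pre-defined acceptance
# thresholds such as a URS might specify. Thresholds here are illustrative.

def evaluate_kpis(y_true, y_pred):
    """Compute basic classification KPIs from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,       # sensitivity
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

def check_acceptance(kpis, thresholds):
    """Return (overall pass/fail, per-metric results) against acceptance criteria."""
    results = {m: (kpis[m], kpis[m] >= t) for m, t in thresholds.items()}
    return all(ok for _, ok in results.values()), results

# Illustrative acceptance criteria a URS might define:
THRESHOLDS = {"accuracy": 0.90, "recall": 0.85, "precision": 0.80}
```

The point is not the arithmetic; it is that "the model performs well" becomes a documented, reproducible pass/fail check traceable back to the URS.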

3. Data Integrity and Governance Must Include the "Model Pipeline"

If the model uses incomplete, biased, or poorly governed data, the AI output becomes unreliable, even if the software is "validated."

4. Human Oversight Matters

Regulators increasingly care about whether users interact with AI correctly, interpret results appropriately, and understand limitations. This is often missed, and it's one of the biggest real-world compliance risks.
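One way to make that oversight operational is to encode, in the system itself, when an AI output may proceed automatically and when it must be routed to a qualified reviewer. A minimal sketch, assuming a URS-defined confidence threshold and illustrative use-case names (both are assumptions for illustration):

```python
# Sketch: routing AI outputs to human review based on a confidence threshold.
# The threshold value and use-case categories are illustrative assumptions.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90          # assumed URS-defined confidence threshold
ALWAYS_REVIEW = {"adverse_event_classification", "release_decision"}  # assumed high-risk uses

@dataclass
class AiOutput:
    use_case: str        # e.g., "deviation_triage"
    confidence: float    # model-reported confidence, 0.0-1.0

def disposition(output: AiOutput) -> str:
    """Decide whether an output may auto-proceed or requires human review."""
    if output.use_case in ALWAYS_REVIEW:
        return "human_review"          # high-risk uses are always reviewed
    if output.confidence < REVIEW_THRESHOLD:
        return "human_review"          # low-confidence outputs are escalated
    return "auto_proceed"              # still logged and auditable
```

Encoding the rule this way means the human-review expectation is testable during validation rather than living only in an SOP.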

Validation and Control Considerations When Designing or Selecting AI/ML Software

Whether you're building internally or selecting a vendor solution, the checklist later in this article ("AI/ML User Requirements and Validation Elements") summarizes the core controls QA teams should evaluate and embed early. First, though, the pitfalls that most often undermine them:

Pitfalls QA Teams Commonly See in AI/ML Implementations

Here are recurring pitfalls that slow implementations and increase compliance risks:

  1. Vague URS or no URS - Without clear requirements, it's impossible to define acceptance criteria or validate intended use.
  2. Overreliance on Vendor Claims - "Validated," "GxP-ready," and "compliant" are not outcomes; they're marketing terms unless backed by evidence you can leverage and control.
  3. Treating AI Like Traditional Deterministic Software - If you validate only that the software runs, not that the AI behaves as expected across representative cases, you haven't validated what matters.
  4. Lack of Monitoring and Drift Controls - Without performance monitoring and drift detection, a tool that was fit yesterday may not be fit six months from now.
  5. No Controls for Human Use - If users can interpret or apply outputs inconsistently, your compliance risk increases, even if your technical documentation is strong.
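Pitfall 4 in particular has a well-established quantitative counterpart: a drift metric computed on production inputs versus the data the model was validated on. A minimal sketch using the Population Stability Index (PSI); the bin count and the 0.2 alert threshold are common industry conventions, not regulatory requirements:

```python
# Sketch: Population Stability Index (PSI) as a drift metric comparing a
# baseline (validation-time) distribution to current production data.
# Bin count and the 0.2 alert threshold are conventions, not regulations.
import math

def psi(expected, actual, bins=10):
    """PSI over equal-width bins spanning the combined value range."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def fractions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(values), 1e-4) for c in counts]
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_alert(expected, actual, threshold=0.2):
    """True when drift exceeds the monitoring plan's alert threshold."""
    return psi(expected, actual) > threshold
```

Whatever metric is chosen, the monitoring plan should define it up front, along with cadence, alert thresholds, and the response when an alert fires.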

AI/ML User Requirements and Validation Elements

Below is a high-level checklist QA teams can use when developing URS, validation plans, and control strategies:

For each category below, the bullets describe what "good" looks like (evidence / output).

Intended Use & Scope
  • Written intended use statement with GxP classification (e.g., direct/indirect impact) and documented rationale
  • Workflow map showing where AI is used and whether it informs, recommends, or automates decisions
  • Clear list of what the system does/does not do, acceptable use cases, and exclusion criteria
  • Role matrix and access control requirements aligned to least privilege and segregation of duties
Requirements & Risk
  • URS includes AI behavior, confidence thresholds, human review expectations, audit trails, and data handling
  • Documented risk assessment linked to intended use and validation approach
  • Defined which outputs become quality records, retention rules, and review/approval requirements
Model & Performance
  • Defined KPIs (accuracy, recall, precision, specificity, etc.) and objective acceptance thresholds
  • The test plan includes normal cases, rare cases, stress cases, and failure mode scenarios
  • Model version history, release documentation, and the ability to identify which version produced an output
  • Documented rationale for explainability expectations; evidence of interpretability where required
Data Governance
  • Data lineage documented, access controlled, and data integrity controls applied
  • Documented pipeline, transformation checks, and evidence that preprocessing is consistent and validated
  • Security assessments, data encryption, access controls, and privacy impact evaluation
  • Audit logs show who did what, when, what the AI returned, confidence, and what decision was made
Validation Strategy
  • The validation plan defines the strategy (IQ/OQ/PQ or equivalent), risk-based testing depth, and rationale
  • Requirements traceability matrix (RTM) linking URS to test scripts and results
  • Negative testing demonstrates safe failures, predictable behavior, and controlled outcomes
  • SOPs define how AI outputs are used; training and competency evidence exist
Operations & Lifecycle
  • Change control SOP and process includes model retraining, configuration adjustments, and rollback strategy
  • Defined drift metrics, monitoring cadence, alert thresholds, and response plan
  • Formal review schedule and documented triggers for investigation, CAPA, and re-validation
  • Defined offboarding process, data retention, and audit evidence preservation
Supplier Quality
  • Vendor documentation reviewed (development methodology, training data governance, validation evidence)
  • Agreement includes notification timelines, change classification, regression evidence expectations
  • Incident processes aligned to deviation management, escalation, and response SLAs
  • Audit-ready package: User Requirement Specifications (URS), Risk Assessment (RA), validation evidence, supplier evidence, monitoring records, SOPs
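Several of the checklist items above (model version traceability, audit logs capturing who did what, when, what the AI returned, and what decision was made) can be grounded in one structured record. A minimal Python sketch; the field names and helper are illustrative assumptions, not a prescribed schema:

```python
# Sketch: a structured audit-trail entry capturing who used the AI, when,
# which model version produced the output, the confidence, and the decision.
# Field names are illustrative; real systems would add tamper-evidence
# (e.g., hash chaining or signatures) and access-controlled storage.
import json
from datetime import datetime, timezone

def audit_record(user, action, model_version, ai_output, confidence, decision):
    """Build one audit-trail entry as a JSON string."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "user": user,                        # who
        "action": action,                    # did what
        "model_version": model_version,      # which version produced the output
        "ai_output": ai_output,              # what the AI returned
        "confidence": confidence,            # reported confidence
        "decision": decision,                # what decision was made
    }
    return json.dumps(entry, sort_keys=True)
```

A record like this is what makes the "identify which version produced an output" expectation demonstrable during an inspection rather than a claim.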


How Third-Party Support Can Accelerate AI/ML Success (Without Slowing Innovation)

Many organizations discover that AI/ML initiatives move quickly until validation, governance, and inspection readiness concerns arise. That's where a third-party partner can significantly reduce friction.

A consulting partner with deep GxP and regulatory expertise, such as ProPharma, can help teams:

  • Translate AI functionality into compliant URS and intended-use definitions
  • Build risk-based validation strategies aligned to FDA/MHRA/EMA expectations
  • Establish lifecycle governance, monitoring, and drift control frameworks
  • Qualify vendors and develop supplier quality agreements with AI-specific coverage
  • Design SOPs and training programs that address real-world user interaction risks
  • Prepare documentation and evidence that stands up to audits and inspections

The benefit is not just compliance; it's speed with confidence, because quality expectations are integrated early rather than retrofitted after implementation.

Closing Thoughts

AI/ML-enabled software can deliver meaningful efficiency, improved trending insights, and enhanced decision support, but only when it is implemented with clear requirements, controlled intended use, robust validation, and lifecycle governance.

In many ways, the compliance question isn't whether AI is allowed. It's whether your organization can demonstrate:

  • The system is fit for intended use
  • Users interact with it in a controlled, qualified manner
  • Outputs are reliable and traceable
  • Changes are governed and performance is monitored

That's the standard regulators will continue to expect, regardless of how fast AI evolves.
