The FDA's April 2026 Warning Letter to Purolea Cosmetics Lab is a notable milestone for the pharmaceutical industry. While there have been prior Warning Letters involving products that incorporate AI, this case may represent one of the first enforcement actions specifically citing the use of AI itself in pharmaceutical manufacturing operations as non-compliant.
This distinction matters.
It signals that FDA expectations around AI are no longer theoretical; they are being enforced under existing CGMP frameworks.
What Happened: More Than a Quality System Failure
At a foundational level, the FDA cited familiar CGMP violations, including failures of the Quality Unit (QU) under 21 CFR 211.22:
- Procedures were not adequately established or followed
- Batch records were not reviewed prior to release
- Process validation was not performed
- Production and process controls were inadequate
However, the differentiator in this case was the firm's reliance on AI tools to:
- Generate specifications
- Draft procedures
- Create master production and control records
Most critically, the firm failed to independently verify these outputs. The FDA highlighted a striking example:
The firm was unaware that process validation was required because the AI system "never told" them.
This statement underscores the central compliance failure: not the use of AI, but the absence of governance, oversight, and validation around its use.
The Real Issue: Absence of AI Governance
The FDA's position is clear and consistent with existing regulations:
- AI-generated outputs must be reviewed and approved by the Quality Unit
- Responsibility for compliance cannot be delegated to an AI system
- All systems impacting product quality must be controlled, qualified, and monitored
In this case, AI was effectively treated as an authoritative source rather than a GxP-relevant system requiring validation and oversight.
Key Pitfalls Highlighted by This Case
1. Treating AI as a Regulatory Expert
AI systems do not "know" regulations; they generate outputs based on predicted patterns. Assuming completeness or correctness without verification introduces significant risk.
2. Lack of Risk-Based Evaluation of AI Use
Not all AI use cases carry the same risk. Generating SOP templates is very different from defining product specifications or control strategies. The absence of structured risk assessment led to inappropriate reliance.
3. No Qualification or Validation of AI Systems
AI tools were used in GxP-relevant processes without evidence of:
- Intended use definition
- Risk assessment
- Qualification activities
This is analogous to using unvalidated software in manufacturing.
4. Missing Oversight and Monitoring Mechanisms
There were no controls in place to:
- Review AI outputs consistently
- Detect errors or omissions
- Escalate risks to the Quality Unit
5. Erosion of Core CGMP Knowledge
AI should assist human expertise, not replace it. Foundational requirements like process validation must remain embedded within the organization's governance framework.
How This Could Have Been Prevented: A CSV/DI Perspective
From a Computer System Validation (CSV) and Data Integrity (DI) standpoint, this Warning Letter reflects a lack of structured controls that are already well-established for GxP systems.
A robust, risk-based AI compliance approach would have included:
1. Risk Assessment of AI Use Cases
Before deployment, each AI application should be evaluated based on:
- Severity of failure: What is the impact if the AI output is incorrect?
- Likelihood of failure: How reliable is the underlying AI model and technology?
- Detectability: What is the probability that an error would be identified before impacting product quality or patient safety?
This type of structured assessment ensures that higher-risk use cases (e.g., specifications, control strategies) receive proportionally higher controls.
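To make this concrete, here is a minimal sketch of how such a scoring could work, using an FMEA-style Risk Priority Number (RPN). The 1-5 scales, thresholds, and control tiers below are illustrative assumptions, not criteria drawn from the Warning Letter or FDA guidance:

```python
# Minimal FMEA-style sketch for scoring AI use cases. The 1-5 scales,
# thresholds, and control tiers are illustrative assumptions only.

def risk_priority(severity: int, likelihood: int, detectability: int) -> int:
    """Risk Priority Number; a higher detectability score means the
    failure is HARDER to detect, so it raises the overall risk."""
    for score in (severity, likelihood, detectability):
        if not 1 <= score <= 5:
            raise ValueError("scores must be on a 1-5 scale")
    return severity * likelihood * detectability

def control_tier(rpn: int) -> str:
    """Map an RPN to a proportional control tier (thresholds assumed)."""
    if rpn >= 60:
        return "validation + 100% human-in-the-loop review"
    if rpn >= 20:
        return "qualification + sampling-based Quality Unit review"
    return "periodic review"

# AI drafting product specifications: severe impact, errors hard to detect
print(control_tier(risk_priority(severity=5, likelihood=3, detectability=4)))
# AI suggesting SOP formatting: low impact, easily caught in review
print(control_tier(risk_priority(severity=2, likelihood=2, detectability=1)))
```

Under a model like this, an AI tool drafting product specifications would automatically land in the highest control tier, while low-impact formatting assistance would not.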
2. Implementation of Risk Mitigation Strategies
Based on the risk profile, appropriate controls should be established, including:
- Testing of predictable aspects of the AI model
  - Verifying outputs against known regulatory requirements
  - Challenging the system with edge cases and known failure scenarios
- Monitoring of non-predictable aspects
  - Establishing SOPs for human-in-the-loop (HITL) review
  - Defining periodic review processes where continuous oversight is not feasible
  - Scaling monitoring rigor based on risk level
These controls ensure that both deterministic and non-deterministic behaviors of AI are appropriately managed.
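As one illustration of what a HITL control could look like procedurally, the sketch below checks an AI-generated draft against a checklist of required elements and routes gaps to the Quality Unit. The document type, checklist entries, and routing rules are hypothetical examples, not regulatory requirements:

```python
# Minimal sketch of a human-in-the-loop (HITL) verification gate for
# AI-generated documents. Checklist entries and routing rules are
# hypothetical; real acceptance criteria would come from your SOPs.

REQUIRED_ELEMENTS = {
    "master_batch_record": [
        "process validation",
        "in-process controls",
        "acceptance criteria",
    ],
}

def missing_elements(doc_type: str, ai_text: str) -> list[str]:
    """Return required elements absent from an AI-generated draft."""
    text = ai_text.lower()
    return [e for e in REQUIRED_ELEMENTS.get(doc_type, []) if e not in text]

def route_for_review(doc_type: str, ai_text: str, risk_level: str) -> str:
    """Every draft gets QU review; gaps or high risk trigger escalation."""
    gaps = missing_elements(doc_type, ai_text)
    if gaps or risk_level == "high":
        return f"ESCALATE to Quality Unit; gaps found: {gaps or 'none'}"
    return "Queue for routine Quality Unit review and approval"

draft = "Covers in-process controls and acceptance criteria..."
print(route_for_review("master_batch_record", draft, risk_level="medium"))
# -> ESCALATE to Quality Unit; gaps found: ['process validation']
```

A gate like this would have surfaced exactly the failure the FDA cited: a missing process validation requirement that the AI "never told" the firm about.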
3. Governance and Lifecycle Management SOPs
AI systems require ongoing governance similar to any validated system, including:
- Change management
  - Updates to AI models
  - Changes in training data
  - Modifications to integrated software platforms
- Data governance
  - Control over training and reference data sources
  - Traceability and versioning
- Procedural oversight
  - Defined roles and responsibilities for AI use
  - Quality Unit approval workflows
  - Documentation standards
Without these controls, AI systems can drift over time, introducing new and unmonitored risks.
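One way to picture this lifecycle discipline is a versioned record that ties each model release to the exact training data it used and to Quality Unit approval. The field names and workflow below are illustrative assumptions, not a prescribed schema:

```python
# Minimal sketch of a lifecycle record linking an AI model version to
# its training data and QU approval state. Field names are illustrative
# assumptions; a GxP deployment would store this in a validated system.

import hashlib
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class AIModelRecord:
    model_name: str
    model_version: str
    training_data_sha256: str        # traceability to the exact data used
    intended_use: str                # the GxP use case the model is qualified for
    qu_approved: bool = False        # Quality Unit sign-off before GxP use
    approval_date: date | None = None

def fingerprint(data: bytes) -> str:
    """Hash training/reference data so any change is detectable."""
    return hashlib.sha256(data).hexdigest()

record = AIModelRecord(
    model_name="spec-drafting-assistant",
    model_version="2.1.0",
    training_data_sha256=fingerprint(b"...training corpus snapshot..."),
    intended_use="Drafting specification templates for QU review",
)

# Any model update or training-data change yields a new record, which
# must pass change control and QU approval before release for GxP use.
print(record.qu_approved)  # False until the Quality Unit signs off
```

Because the record is immutable and fingerprints its data sources, any drift in the model or its training data forces a new, reviewable entry rather than a silent change.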
What FDA Is Signaling to Industry
This Warning Letter reinforces several critical points:
- Existing CGMP regulations already apply to AI-enabled processes
- The Quality Unit remains fully accountable for all outputs
- AI must be treated as a GxP system, not a productivity shortcut
Importantly, FDA is not discouraging AI adoption; it is requiring that AI be implemented within a controlled, risk-based framework.
Turning Compliance into a Strategic Advantage
Organizations that approach AI with the right level of rigor can unlock significant value, including:
- Accelerated documentation processes
- Improved consistency in quality systems
- Enhanced decision support
However, these benefits are only sustainable when paired with structured governance, validation, and monitoring.
How ProPharma Supports Compliant AI Adoption
The Quality, CSV, and Data Integrity experts in ProPharma's QA/AI & ML Compliance Services help organizations implement AI in a way that is both innovative and inspection-ready.
Our capabilities include:
- Development of risk-based AI governance frameworks
- AI system selection and suitability assessments for GxP use
- Qualification and validation aligned with regulatory expectations
- Design and implementation of monitoring and lifecycle management programs
- Establishment of governance, change control, and oversight SOPs
Whether you're building internally or selecting a vendor solution, the controls described above (risk assessment, mitigation, and lifecycle governance) are the ones QA teams should evaluate and embed early. By embedding them from the start, organizations can avoid the pitfalls highlighted in this case and confidently scale AI across their operations.
Closing Thoughts
The Purolea Warning Letter is not just a cautionary tale; it is a clear regulatory signal.
AI is here, and FDA expects it to be governed.
Organizations that treat AI as a controlled, risk-based system will move forward successfully. Those that rely on it without oversight risk not only compliance findings, but also product quality and patient safety.
The path forward is not to slow down AI adoption, but to govern it appropriately.