The Benefits of Process Characterization in Process Development

Process Optimization

The desire for a robust and repeatable manufacturing process is shared by every organization that has a therapy or product in development, and the only way to demonstrate that this desired state has been achieved is through the collection and analysis of data.

But generating process data is expensive. Immediate costs include materials, processing time, analytical testing, schedule, and the focus of highly compensated subject matter experts. Jumping in to collect data without appropriate pre-work (a.k.a. the “ready, FIRE … aim” approach) is also a reliable way of amplifying those costs and wasting effort.

The opportunities to create wasted effort include (but are not limited to) collecting the wrong data, not collecting all the data needed, collecting the right data the wrong way, and (the focus of this discussion) collecting data when none is needed. The good news is that employing the “ready, aim, FIRE” approach to process characterization will mitigate these risks and thereby maximize the return on data investment.

In the following (redacted) example, the process owner requested a “DOE” to characterize the hard-gel capsule filling system on board their newly acquired, next generation encapsulation machine as part of preparations to produce Phase III clinical supplies. The process owner required that next generation weight performance meet or exceed that of the current generation machine from which all earlier-phase clinical supplies had been produced and implied that critical process parameters, their acceptable ranges, and optimal settings were yet to be determined.

Process characterization methodology requires identification, collection, and analysis of all the relevant historical data as a foundation for designing additional data collection. In this case, that included fill-weight data from three (3) replicate runs on the next generation encapsulation machine (collected under site acceptance test [SAT] protocol); and from the current generation machine, fill-weight data from three (3) clinical batches, one (1) filling system study of weight variation across the operating range, and one (1) study to quantify empty capsule weight variation.

While awaiting execution of the SAT, the current generation historical data was analyzed to determine the performance criteria for the next generation filler. The results showed an overall fill-weight capability (a comparison of the voice of the customer to the voice of the process) of Ppk = 2.7 (lower 95% confidence bound at 2.5), which is indicative of a highly capable process: one in which, for example, eight (8) standard deviations fit between the mean and the closest spec limit, and there are far fewer than one (1) defect per million opportunities. This pleasant, yet attention-getting surprise began to cast doubt on the value of creating more next generation process data to determine critical process parameters, their acceptable ranges, and optimal settings.
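To make the Ppk arithmetic concrete, the sketch below computes a process performance index and an approximate one-sided lower confidence bound from a set of fill-weight measurements. The spec limits and sample values are hypothetical, not the author's data; the confidence bound uses a standard normal-approximation formula.

```python
import math
import statistics

def ppk(data, lsl, usl):
    """Process performance index: distance from the mean to the closest
    spec limit, in units of three overall standard deviations.
    A Ppk of 2.7 means 3 * 2.7 ~= 8 standard deviations fit between
    the mean and the nearest spec limit, as described above."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # overall (long-term) standard deviation
    return min(usl - mean, mean - lsl) / (3 * sd)

def ppk_lower_bound(ppk_hat, n, z=1.645):
    """Approximate one-sided 95% lower confidence bound for Ppk,
    using the common normal-approximation formula."""
    return ppk_hat - z * math.sqrt(ppk_hat ** 2 / (2 * (n - 1)) + 1 / (9 * n))

# Hypothetical fill weights (mg) against hypothetical spec limits of 90-110 mg
weights = [99.8, 100.2, 100.1, 99.9, 100.0, 100.3, 99.7, 100.1, 99.9, 100.0]
est = ppk(weights, lsl=90.0, usl=110.0)
lower = ppk_lower_bound(est, len(weights))
```

In practice the lower confidence bound, not the point estimate, is the conservative figure to compare against a capability target, which is why the article reports both.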

Along with historical data analysis, proper process characterization methodology requires leveraging existing process/product subject matter expert knowledge to document how the process is thought to work (a Process Quality Risk Assessment tool is highly recommended for this step). Because of the high level of capability exhibited by the current process, it was analyzed in addition to the next generation filling process. This revealed that both filling systems operated under the same procedural controls (same CMO) and had essentially identical equipment—for example, the same positive displacement pump/control system, reservoir, delivery lines, and nozzles. It seemed increasingly likely that the critical process parameters, their acceptable ranges, and the optimal settings were already in hand.

Upon completion of the next generation filler SAT, the analysis indicated (as was now expected) a high level of capability, at Ppk = 3.6, with a lower 95% confidence bound at 3.4.

So, that was it. There was no value in creating more next generation process data to determine critical process parameters, their acceptable ranges, and their optimal settings. It was time to lock it down and move on.

Certainly, failure to fill the correct amount into a capsule could pose a significant risk to patient safety, but the evidence was overwhelming that the likelihood of that happening was extremely small … as long as the controls were locked down. The appropriate course of action, rather than collecting more data, was to employ process quality risk management to formally document the controls and the quality system, and to establish objective evidence of those controls, e.g., operating and maintenance procedures, and qualification and technical reports. Additionally, next generation fill-weight verification should continue with the manufacture of Phase III clinical supplies (at no additional cost). There was still work to be done, but “DOE” would not be part of it.

A significant amount of waste was avoided by employing the “ready, aim, FIRE” approach to process characterization. In this case, the process owner had already spent the time and money needed to create the raw data and information required for a compliant, capable filling process but had failed to turn that raw data and information into knowledge (akin, perhaps, to a farmer preparing the soil, planting the seeds, and forgetting to water). In general, experience shows that a comprehensive and systematic analysis of the historical data and process/product knowledge—to precisely define the information gap and what’s required to close it—greatly increases the likelihood of attaining cost, schedule, and regulatory goals.

Gary Hyde
Principal Consultant of Product Lifecycle Management
Published February 21, 2017

Gary Hyde is a Principal Consultant of Product Lifecycle Management with ProPharma Group and a Johnson & Johnson Certified Six Sigma/Lean Master Black Belt.
