Late last year we started tracking and trending stability data in real time using software that analyzes results entered by the testing lab each day. The system is designed to flag potentially aberrant results and notify stability and lab management. The intention is for the lab to investigate each flagged value first, to confirm that no laboratory error caused the alarm.
Last week, one of our established oral-dosage antacid-type drugs was flagged as aberrant for potency at the 12-month interval and was projected to fail at 24 months against its 48-month shelf life.
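For readers unfamiliar with how these projections are typically made: a common approach, in the spirit of ICH Q1E, is to fit potency versus time by least squares and find where the one-sided 95% lower confidence bound on the regression line crosses the specification. The sketch below uses made-up potency numbers (chosen only to mimic the pattern I described, flat early intervals and a sudden 12-month drop) and a 90% spec; it is an illustration of the technique, not our vendor's actual algorithm.

```python
import numpy as np
from scipy import stats

# Hypothetical potency data (% label claim) at each interval (months).
# These values are invented for illustration; note the 12-month drop.
months = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
potency = np.array([98.5, 98.3, 98.4, 98.2, 95.0])

# Ordinary least-squares fit: potency = intercept + slope * time
slope, intercept, r, p, se = stats.linregress(months, potency)

n = len(months)
t_crit = stats.t.ppf(0.95, n - 2)  # one-sided 95%, n-2 df
resid = potency - (intercept + slope * months)
s = np.sqrt(np.sum(resid**2) / (n - 2))  # residual std. deviation
sxx = np.sum((months - months.mean())**2)

def lower_bound(t):
    """One-sided 95% lower confidence bound on mean potency at time t."""
    se_mean = s * np.sqrt(1.0 / n + (t - months.mean())**2 / sxx)
    return intercept + slope * t - t_crit * se_mean

# Scan forward to find the first month where the bound dips below spec.
spec = 90.0
cross_month = next((t for t in range(61) if lower_bound(t) < spec), None)
print(f"Lower 95% bound projected to cross {spec}% at ~{cross_month} months")
```

With one anomalous low point inflating the residual variance, the confidence band widens and the projected crossing lands far short of the labeled shelf life, which is exactly how a single aberrant 12-month result can trigger a 24-month failure projection.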
Visually, I can see that the first few intervals are flat, though on the lower side, and then the 12-month interval drops suddenly. With such extensive historical data, stability folks like me are not worried, but lab and QA management are ready to initiate a recall! The lab's investigation, which did not include any retesting, found no hint of error on their side (of course).
I believe the batch was initially 1 to 2% lower than the main population, but that has been seen several times in the past and is well within the process limits. The relatively flat degradation pattern through 9 months is typical, so either the 12-month data is erroneous or it simply represents a single sample on the low end that is still well within specification (>90%). I think we're seeing some of the key pain points of our automated statistical tracking process, and it just needs tweaking. Where should we go from here?
Should we justify a potency re-test, or perhaps add a data point at the next monthly interval? What would you do?