iguito | Participant | January 11, 2023 at 3:10 am | Post count: 1
Hello stability community!
I'm just starting in the stability world and we have our first challenge: what is the recommended time to perform analytical testing once the product has left the climatic chamber? We can't find any guidelines on this. We have heard that about one month is acceptable, but we have to include this topic in our SOP and can't find any reference for it. How long after the pull do you perform the analysis in your lab? One month? Is there any kind of guideline to justify it?
Thank you so much in advance!

jeffdlowell | Participant | January 11, 2023 at 4:15 pm | Post count: 2
My experience has been one month; however, for shorter timepoints (3 days, 1 week) I have seen that window reduced.

John O'Neill | Moderator | January 18, 2023 at 9:12 am | Post count: 63
I would agree. While there is no guidance or regulation, 30 days has become the general practice, and some companies have received regulatory citations for consistently exceeding it without solid justification. When regulatory agencies act on this basis, it's known as "an expectation," and is usually noted by the regulators as not conforming to industry standards of practice.

That said, there are many medical products (such as biologics and vaccines) that require windows tighter than 30 days, since rapid changes can occur once samples are pulled or aliquoted, and it's prudent to get results (at least for certain tests) as soon as possible. From time to time, regulatory bodies suggest or imply that test windows should be tightened.

It's fascinating to see the many ways that companies set their test windows. As noted in the previous response, some are timepoint-driven, with sliding scales created to govern the window. It follows that it's good practice to establish target windows for each stage of the stability operations process: pull windows, interim storage/time out of refrigeration, start of testing, completion of testing, results review and approval, and final Quality release of official results. Slippage in any of these areas can impact the subsequent stages and start to loom on regulatory radar.

We know that real life periodically intervenes to disrupt our plans, and it's fine to register a deviation, investigate, and take corrective and preventive actions. We have the obligation to track these events and present metrics showing that they are trending downward in response to our quality systems. Overall, well-thought-out SOPs based on solid justifications for our test windows, with detailed pathways for non-routine events, are the best way to stay clear of compliance issues.

Walter Routh | Moderator | January 18, 2023 at 2:26 pm | Post count: 32
I certainly agree with the previous posts regarding one month. Two things to add, though.

First, you should probably consider something shorter than a month for earlier intervals. The issue an auditor has, I believe, is how long it took to recognize a problem (I don't think they're thinking about how time has passed and degradation has continued while awaiting testing). At the 2- or 3-month timepoint, a one-month delay to find an OOS or OOT result is acceptable, but at a 1-month interval you should probably consider a 1-week or 2-week test timing requirement.

Second, the definition of the action required at 30 days is important. Ideally (if you have the resources), the requirement should be that testing is completed and verified. One company has a requirement simply to initiate testing within 30 days. In my opinion this is dangerous, since from an auditor's perspective it would allow too much time before the company recognizes a significant OOT or OOS and takes field action if necessary.