Stabilitarians and Stakeholders ask over and over: “What are effective measurements to evaluate a Stability Function?” “How long should the various processes take?” “How can you repair or eliminate problems of deviations, time overruns, and date discrepancies?”
Today we’ll explore these questions and more, taking into consideration guidance documents, regulations, industry best practices, and an undocumented set of rules called “regulatory expectations”.
Let’s start by asking, “What’s ‘stability indicating’ for our Stability Function?” Do we measure and evaluate such parameters as:
- Sample pulls within assigned time window
- Sample loss (or unintended gain) throughout the process of sample control
- Tests completed in assigned time window
- Data review in assigned time window
- Chamber excursions and repair time
- Training completions in assigned time window
- Total number of Stability Program-related deviations
- Stability-related deviations/investigations closed on time
- Trends in preventable errors
- Audit observations / repeat observations
What others can you think of? These are items to monitor, trend and improve upon.
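Most of the time-based metrics above reduce to simple window arithmetic. Here is a minimal sketch, in Python, of an on-time rate for sample pulls; the record structure, the 7-day window, and the sample dates are invented for illustration, and your SOP defines the real values:

```python
from datetime import date

# Hypothetical pull records: (scheduled_date, actual_date). Illustration only.
pulls = [
    (date(2023, 1, 15), date(2023, 1, 16)),
    (date(2023, 4, 15), date(2023, 4, 25)),
    (date(2023, 7, 15), date(2023, 7, 14)),
]

WINDOW_DAYS = 7  # assumed +/- window; substitute the value your SOP justifies

def on_time_rate(records, window_days=WINDOW_DAYS):
    """Fraction of pulls whose actual date falls within +/- window_days of schedule."""
    on_time = sum(1 for sched, actual in records
                  if abs((actual - sched).days) <= window_days)
    return on_time / len(records)

print(f"{on_time_rate(pulls):.0%}")  # 2 of 3 pulls land inside the 7-day window
```

The same shape works for test completion, review, and approval metrics: replace the scheduled/actual pair and the window constant.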
Zeroing in on definitions & windows, it’s important to define such things as:
- Start dates for studies, testing, and data review
- Completion criteria; when is it done and when is it “done done”?
- Justification for time windows; where did these come from and why are they acceptable?
In general, we do what we do in ways that align with national requirements; official guidances from recognized bodies, as adopted by regulatory authorities; industry best practices for implementing Good Manufacturing Practices (GMPs); and “regulatory expectations”, unwritten standards that regulators have come to expect from observing how industry implements the previous three types of constraints. The “30 Day Rule” is an example of an expectation with no basis in regulation or guidance; it has become an expectation because industry so often practices it in the conduct of GMPs.
Dates and Windows to Watch
Each of our organizations should have a justification for a distinct definition of the key waypoints associated with the stability process. These might include Dates of:
- “Manufacture” / Start of Shelf Life,
- Samples in chamber
- Start of Stability Study
- Both ends of a Pull Window
- Both ends of a Test Window (start/completion)
- Both ends of Review & Approval Windows
Date of “Manufacture” (DOM) vs. Date in Chamber (DIC)
A perennial topic of debate has been whether to use Date of Manufacture or Date in Chamber as the starting point of a stability study. Follow this PSDG discussion exchange captured from an Industry List Server to see the variety of opinions that exist around this topic.
Pharmaceutical Stability Professionals, for stability studies on commercial drug products (at Long-Term conditions), when is the appropriate date to begin counting days until the first stability pull interval? I have an SOP that instructs me to use the Date of Manufacture as the study start date for Long-Term stability studies. For accelerated or intermediate conditions, however, the SOP instructs me to use the date the samples were placed into the chamber as the study start date. I’m curious to see if my SOP is acceptable as written, since it is reflective of the commercial process, or if it needs to be updated to use the date the samples are placed into the chamber as the study start date.
Response 1: FDA thinking on when the clock starts
I have had stability studies start both ways for CRT. A manufacturing date of 01Apr20 would give the accurate 3-month date for Controlled Room Temperature storage, but a 01May20 start date on stability would provide a worst case, especially toward the end of the study. Your SOP is appropriate, and I would argue either approach is acceptable. See the regulatory citation:
The 1998 Draft FDA Stability Guidance for Drug Substances/Products, page 44, brings Release into the picture: Link => http://www.fda.gov/cder/guidance/1707dft.pdf
“The computation of the expiration dating period of the drug product should begin no later than the time of quality control release of that batch, and the date of release should generally not exceed 30 days from the production date, regardless of the packaging date. The data generated in support of the assigned expiration dating period should be from long-term studies under the storage conditions recommended in the labeling. If the expiration date includes only a month and year, the product should meet specifications through the last day of the month.”
Response 2: Date of “Production” Start of Study vs Start of Shelf Life
You should have an SOP which defines the date of production, release date and zero-time for the stability storage program.
A stability storage program starts from the date samples are placed in a stability chamber. Pull dates start from the date of placing samples in the chamber.
You write, “I have an SOP that instructs me to use the Date of Manufacture as the study start date for Long-Term stability studies.” According to EU GMP, the date of batch production is defined as the date of combining the active ingredient with the other ingredients. How can this be the start of a stability study? Samples need to be incubated under constant, controlled temperature and humidity. In my opinion, SOPs should reflect this.
Summary of the Discussion and potential path(s) forward:
There is a wide range of definitions in place. The choice between Date of Manufacture and Date in Chamber significantly impacts your stability studies and shelf-life calculations. While some have SOPs that incorporate both, no single ideal solution has emerged. However, scientifically based justifications for our definitions and for our selection of key dates and time windows can minimize regulatory questions, challenges, and citations.
Key Dates and Windows
Below are some observations around key dates and windows associated with the stability process that may help inform our decisions regarding definitions and time-related metrics.
Date of Manufacturing
- Whether you use the date the API touches any other ingredient, the final step in the mixing process, the date into a package, the date sterilized, or some other milestone, pick and justify what makes sense for your process and stay with it.
Date of Release
- Often has a time requirement from a defined process completion point (30 days is common, but different dosage forms may require a shorter time frame)
- Procedures should define which part of the release process dates the official release (test completion, review, approval, etc.)
Date of Packaging
- Should not be presumed to occur at time of manufacturing, though some regulatory bodies prefer this. Others may encourage additional releases at subsequent steps
- May occur months later in a different part of the world
- If there are multiple steps, the final step that protects the product could be the packaging date.
- Define whether labeling impacts the packaging date
- Document any further “release” involved for packaging
- SOP may place time limits for packaging following manufacture
- Short time frames (≤ 30 days) are preferred by regulators
- Some companies take up to and beyond 6 months to package
Date on Stability (DOS)
- Generally defined as date in chamber.
- For long-term studies, some regulators prefer a study start at the date of manufacture, but don’t neglect the factors that define a stability study: access-limited, temperature/RH-controlled, monitored, documented, and reviewed storage conditions. When these are absent, how can we prove that we didn’t artificially protect the product prior to the date in chamber? That said, some have argued that elapsed time between DOM and DOS adds to a worst-case scenario borne by the manufacturer
- SOPs should make your DOS policy absolutely clear
- Some require that DOS occur within 30 days of release; otherwise, benchmark “Time Zero” testing must be performed in addition to the earlier Release Testing
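The 30-days-from-release check in the last bullet is easy to make mechanical. A minimal sketch, assuming the 30-day figure from that bullet; the function name and dates are hypothetical:

```python
from datetime import date

T0_LIMIT_DAYS = 30  # assumed policy: DOS must fall within 30 days of release

def needs_time_zero_testing(release: date, in_chamber: date) -> bool:
    """True if samples went on stability too long after release,
    triggering benchmark "Time Zero" testing per the policy above."""
    return (in_chamber - release).days > T0_LIMIT_DAYS

print(needs_time_zero_testing(date(2023, 1, 10), date(2023, 2, 1)))  # False: 22 days
print(needs_time_zero_testing(date(2023, 1, 10), date(2023, 3, 1)))  # True: 50 days
```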
Stability Protocols put your definitions and windows into action within the Stability Function
A good Protocol lists all the starts, stops, and windows. A stability protocol:
- Fulfills compliance requirements
- Is a repository for justifications
- Encompasses Good Documentation Practices
Some additional discussion of key considerations to take into account when developing metrics for the Stability Function follows.
What’s a “month”?
Here’s another topic of debate, and all four of these methods are in use:
- Whatever the calendar says
- 4 weeks
- 30 days
- # of days to get to this date next month
Pick which one floats your boat and support it with your best justification.
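Whichever you pick, it helps to see how much the definitions actually diverge. In the sketch below (method names invented), “whatever the calendar says” and “days to the same date next month” coincide except at month-end, where the calendar date must be clamped, so three computable variants cover all four:

```python
from calendar import monthrange
from datetime import date, timedelta

def add_month(start: date, method: str) -> date:
    """Return the 1-month time point from `start` under each definition in use."""
    if method == "calendar":  # same day-of-month next month, clamped to month length
        y = start.year + start.month // 12
        m = start.month % 12 + 1
        return date(y, m, min(start.day, monthrange(y, m)[1]))
    if method == "4_weeks":
        return start + timedelta(weeks=4)
    if method == "30_days":
        return start + timedelta(days=30)
    raise ValueError(f"unknown method: {method}")

t0 = date(2022, 3, 15)
for method in ("calendar", "4_weeks", "30_days"):
    print(method, add_month(t0, method))  # Apr 15, Apr 12, and Apr 14 respectively
```

Three days of spread after one month compounds over a multi-year study, which is why the justification matters.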
Does a day end where you are, or where your CPU is?
Some international organizations’ systems run on clocks based overseas from your location. When you log a task as completed at 8 p.m. local time, documents generated by an overseas server may report it as having occurred the morning of the following day. In those cases, deadlines may be artificially missed and undeserved deviations recorded.
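The day-boundary mismatch is easy to demonstrate. A minimal sketch using fixed UTC offsets chosen for illustration (production systems should use proper time-zone databases such as `zoneinfo`):

```python
from datetime import datetime, timedelta, timezone

# Fixed offsets for illustration only:
LOCAL = timezone(timedelta(hours=-4), "EDT")   # e.g., US East Coast in summer
SERVER = timezone(timedelta(hours=9), "JST")   # e.g., an overseas data center

# A task completed at 8 p.m. local time on 05 June...
local_done = datetime(2022, 6, 5, 20, 0, tzinfo=LOCAL)

# ...is stamped by the overseas server, for the same instant, as the next morning.
server_stamp = local_done.astimezone(SERVER)

print(local_done.date())    # 2022-06-05
print(server_stamp.date())  # 2022-06-06
```

Same instant, different calendar days: an SOP should state which clock governs the deadline.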
Fool-proofing how dates are formatted
This can be crucial as well. Depending on where you’re standing, 6/5/2022 may not equal 5/6/2022, but it does equal 5 June 2022. We shouldn’t assume that other cultures will remember to use the conventions of their parent organizations. Misinterpretation can be avoided through the use of an unambiguous format in which the month is written in alpha characters.
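The ambiguity and its fix can be shown in three format strings (note that `%b` is locale-dependent; the English abbreviation assumes the default C locale):

```python
from datetime import date

d = date(2022, 6, 5)
print(d.strftime("%m/%d/%Y"))  # 06/05/2022 -- a US reader's 6/5/2022
print(d.strftime("%d/%m/%Y"))  # 05/06/2022 -- the same day read elsewhere
print(d.strftime("%d %b %Y"))  # 05 Jun 2022 -- unambiguous either way
```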
A Pull Window
A pull window is the range of time allowed before and after the scheduled calendar date for withdrawing samples for testing at a specific time point. A wide range of practices exists across industry, the most conservative being pull on the exact date, contrasted with a liberal sliding scale with up to 30 days of leeway. Regulators have applied recommendations ranging from not pulling in advance and allowing a “next business day” window, to a week or two on either side of the scheduled date.
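The practices described above differ only in two numbers, the days allowed before and after the scheduled date. A minimal sketch (function name and policy values invented for illustration):

```python
from datetime import date, timedelta

def pull_window(scheduled: date, days_before: int, days_after: int):
    """Earliest and latest acceptable pull dates around a scheduled time point."""
    return (scheduled - timedelta(days=days_before),
            scheduled + timedelta(days=days_after))

# Illustrative policies only; your SOP sets and justifies the real values.
sched = date(2023, 7, 15)
print(pull_window(sched, 0, 1))  # no early pulls, next-day grace
print(pull_window(sched, 7, 7))  # a week on either side
```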
Test Window (start/completion)
The FDA has applied a 30-day test completion window “expectation” to many traditional pharmaceutical products and readily issues 483s for multiple unjustified non-compliances. In a few cases, Regulators have pushed for shorter time windows for test completion (PSDG feedback). SOPs should define “start” and “completion” points with respect to sample pulls, testing, reviews, and approvals. Build extension periods into SOPs to allow for non-business days and completion of quality investigations.
Should you shrink your own window?
It’s always good to challenge our metrics and tighten them when practical. Some shorten their windows for testing samples from accelerated conditions, some when serial testing from a single sample or when a particular product attribute deteriorates quickly under lab storage or when a registration or clinical trial deadline looms. Whatever your practice, don’t shorten windows when it would lead to a significant increase in missed targets and deadlines.
Review & Approval Windows
Regulators are generally not specific regarding these parameters, but inspections flow more smoothly when they are well-defined in SOPs, leaving no room for questions. Reasonably short periods are desired for these windows such as 1-2 business days for a supervisor review and 5-10 days for final Quality Unit approval. Build extension periods into SOPs to allow for non-business days and quality investigations.
The Critical Importance of Justifications
Do you have a documented scientific basis for the “Why” of your windows, procedures, decisions, and equipment choices? A lack of justification demonstrates a lack of planning and control, and it’s not unusual for organizations to be severely lacking in this area.
The Justification Ladder
- Good: It’s a standard Industry practice (and here is documented evidence of how we know that)
- Better: It’s required by Guidance / Regulation
- Best: We have data to show this is the most scientific, accurate, and reproducible method.
- Unacceptable: We’ve always done it that way and we’ve never been challenged.*
*When this is “all you’ve got”, turn it around to “We’ve analyzed and trended the outcomes of this practice to be confident in continuing it.” Be sure to have documentation to back it up.
Benjamin Franklin is credited with 2 maxims that would apply to the stability process:
“Keep thy shop and thy shop will keep thee” and “A stitch in time may save nine.”
We can “keep” our shops and reduce our “stitching” by:
- Developing well-defined & effective Stability Function metrics
- Establishing and Justifying our Stability Windows
- Trending and remediating deviations from metrics
- Pursuing continuous improvement
May your Stability Calendars be current, your Clocks on time & your Scoreboards show you winning.