I would agree. While there is no Guidance or regulation, 30 days has become the general practice, and some companies have received regulatory citations for consistently exceeding it without solid justification. When regulatory agencies act on this basis, it's known as "an Expectation," and it is usually noted by the regulators as nonconformance to industry-based standards of practice.

That said, many medical products (such as biologics and vaccines) require windows tighter than 30 days: once samples are pulled and aliquoted, rapid changes can occur, so it's prudent to get results, at least for certain tests, as soon as possible. From time to time regulatory bodies suggest or imply that test windows should be tightened.

It's fascinating to see the many ways that companies deal with setting their test windows. As noted in the previous response, some are timepoint driven, and sliding scales are created to govern the test window. It follows that it's good practice to establish target windows for each stage of the stability operations process: pull windows, interim storage/time out of refrigeration, start of testing, completion of testing, results review and approval, and final Quality release of official results. Slippage in any of these areas can impact the subsequent stages and start to loom on regulatory radar.

We know that real life periodically intervenes to disrupt our plans, and it's fine to register a deviation, investigate, and take corrective and preventive actions. We have the obligation to track these events and present metrics showing that they are trending downward in response to our quality systems.

Overall, well-thought-out SOPs based on solid justifications for our test windows, plus detailed pathways for non-routine events, are the best ways to keep us away from compliance issues.