Statistical analysis

Viewing 3 posts - 1 through 3 (of 3 total)
  • stability lady
    Post count: 1

    At my workplace, we perform statistical evaluations for products with an approved shelf life only six months prior to shelf-life expiration, and not earlier (and only upon request, not routinely).
    I have not been able to find any explanation for this practice. Does anyone know the reason for it?

    I also have a question about the decision tree in ICH Q1E: is it used only for new products, or do we have to use it every time we want to evaluate the stability data of a specific lot?

    Thank you!

    John O’Neill
    Post count: 75

    Hello Stability Lady,
    I don’t have an explanation for the practice you mention other than someone having a concern about “validating” the shelf-life expiration as the study nears its end. In my experience, commercial product stability study results get a statistical evaluation update at every annual interval and at or near (as in your case) the final pull point. This is based more on regulatory “expectation” than on a requirement.

    The Q1E guidance lets itself off the hook with the statement “The recommendations on statistical approaches in this guidance are not intended to imply that use of statistical evaluation is preferred when it can be justified to be unnecessary”, but the language that follows implies: “but it’s your funeral”. I would visit your internal quality assurance personnel and ask whether they have a justification on record for the current policy.
    The decision tree is primarily for the initial setting of shelf life or re-test periods, but statistical confirmation practices can also be found in the guidance.

    Walter Routh
    Post count: 41

    There certainly isn’t a regulatory rationale for that policy, but I can think of a few business reasons. The first, if your shelf life is 24 months or less, is that until 18 months you might not have enough data for sound statistics, and if the 95% confidence interval gave disturbing projections, they could be a false alarm simply because of the few data points. The other business rationale could simply be to reduce the resources needed, since analyzing so many batches can balloon into a heavy statistical workload; the justification for waiting would be that the filed data show there’s really no danger of an OOS result, so why not wait?

    Both of these business rationales might be worth some financial risk, but they rest on the flawed assumption that statistics exists only to justify shelf life when, in fact, it is also there to catch unexpected shifts in product performance long before shelf life, if possible. You need to have at least some mechanism in place to assure the relevant agency that you’re checking lot performance routinely throughout the studies. If not statistics, then at least some sort of visual check of your graphical or tabular data.
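    To make the confidence-interval point above concrete, here is a minimal sketch of the Q1E-style regression check: fit a linear model to assay results over time and find where the one-sided 95% confidence bound on the mean response crosses the acceptance criterion. The pull-point data, the 95.0% lower limit, and the 36-month projection window are all made-up illustrative assumptions, not values from any filing.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pull-point data: assay (% label claim) vs. time (months)
    months = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 18.0])
    assay = np.array([100.1, 99.6, 99.2, 98.9, 98.4, 97.6])
    spec_lower = 95.0  # assumed lower acceptance criterion (% label claim)

    n = len(months)
    slope, intercept, r, p, se = stats.linregress(months, assay)

    # Residual standard error of the linear fit
    resid = assay - (intercept + slope * months)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))
    t_crit = stats.t.ppf(0.95, df=n - 2)  # one-sided 95%

    # One-sided 95% lower confidence bound for the mean response,
    # projected out to a 36-month window
    grid = np.linspace(0, 36, 361)
    sxx = np.sum((months - months.mean()) ** 2)
    half_width = t_crit * s * np.sqrt(1 / n + (grid - months.mean()) ** 2 / sxx)
    lower_bound = intercept + slope * grid - half_width

    # Supported shelf life = latest time where the bound stays within spec
    within = grid[lower_bound >= spec_lower]
    shelf_life = within[-1] if within.size else 0.0
    print(f"slope: {slope:.3f} %/month, supported shelf life: ~{shelf_life:.0f} months")
    ```

    With only the first three or four pull points, the bound widens sharply and the supported shelf life shrinks, which is exactly the early-data false-alarm effect described above.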
