Reparameterizing the Weibull Accelerated Failure Time Model#

import arviz as az
import numpy as np
import pymc as pm
import pytensor.tensor as pt

print(f"Running on PyMC v{pm.__version__}")
Running on PyMC v5.18.2

Attention

This notebook uses libraries that are not PyMC dependencies and therefore need to be installed specifically to run this notebook. Open the dropdown below for extra guidance.

Extra dependencies install instructions

To run this notebook (either locally or on Binder) you will need a working PyMC installation with all optional dependencies, plus some extra dependencies. For advice on installing PyMC itself, please refer to Installation

You can install these dependencies with your preferred package manager; as an example, the pip and conda commands are provided below.

$ pip install statsmodels

If you want (or need) to install the packages from inside the notebook rather than from the command line, you can do so by running a variation of the pip command:

import sys

!{sys.executable} -m pip install statsmodels

You should not run !pip install directly, as it might install the package into a different environment, leaving it unavailable from the Jupyter notebook even though it is installed.

Another alternative is using conda instead:

$ conda install statsmodels

When installing scientific Python packages with conda, we recommend using the conda-forge channel.

# These dependencies need to be installed separately from PyMC
import statsmodels.api as sm
%config InlineBackend.figure_format = 'retina'
# These seeds are for sampling data observations
RANDOM_SEED = 8927
np.random.seed(RANDOM_SEED)
# Set a seed for reproducibility of posterior results
seed: int = sum(map(ord, "aft_weibull"))
rng: np.random.Generator = np.random.default_rng(seed=seed)
az.style.use("arviz-darkgrid")

Dataset#

The previous example notebook on Bayesian parametric survival analysis introduced two different accelerated failure time (AFT) models: Weibull and log-linear. In this notebook, we present three different parameterizations of the Weibull AFT model.

The data set we’ll use is the flchain R data set, which comes from a medical study investigating the effect of serum free light chain (FLC) on lifespan. Read the full documentation of the data by running:

print(sm.datasets.get_rdataset(package='survival', dataname='flchain').__doc__)

# Fetch and clean data
data = (
    sm.datasets.get_rdataset(package="survival", dataname="flchain")
    .data.sample(500)  # Limit ourselves to 500 observations
    .reset_index(drop=True)
)
y = data.futime.values
censored = ~data["death"].values.astype(bool)
y[:5]
array([ 975, 2272,  138, 4262, 4928])
censored[:5]
array([False,  True, False,  True,  True])

Using pm.Potential#

We face a unique problem when modelling censored data. Strictly speaking, we don't observe the event time for censored values: we only know that the event had not yet occurred by the censoring time. How can we include this information in our model?

One way to do this is by making use of pm.Potential. The PyMC2 docs explain its usage very well. Essentially, declaring pm.Potential('x', logp) will add logp to the log-likelihood of the model.

However, pm.Potential only affects probability-based sampling; it is ignored by pm.sample_prior_predictive and pm.sample_posterior_predictive. We can overcome these limitations by using pm.Censored instead: we can model our right-censored data by defining the upper argument of pm.Censored.
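To make the pm.Potential approach concrete, here is a hand-rolled sketch (pure NumPy, outside any PyMC model) of the log-likelihood term it would add for right-censored Weibull data: uncensored points contribute the log-density, while censored points contribute the log survival function. The alpha and beta values below are arbitrary illustrative choices.

```python
import numpy as np


def weibull_logpdf(t, alpha, beta):
    # Log of the Weibull density: log(alpha/beta) + (alpha-1)*log(t/beta) - (t/beta)**alpha
    z = t / beta
    return np.log(alpha / beta) + (alpha - 1) * np.log(z) - z**alpha


def weibull_logsf(t, alpha, beta):
    # Log survival function: log S(t) = -(t/beta)**alpha
    return -((t / beta) ** alpha)


def censored_loglik(t, censored, alpha, beta):
    # Uncensored observations contribute the log-density;
    # censored observations contribute the log survival probability.
    return np.where(
        censored, weibull_logsf(t, alpha, beta), weibull_logpdf(t, alpha, beta)
    ).sum()


t = np.array([0.5, 1.2, 2.0])
cens = np.array([False, True, False])
print(censored_loglik(t, cens, alpha=1.0, beta=2.0))
```

pm.Censored performs essentially this bookkeeping internally, while also remaining compatible with prior and posterior predictive sampling.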

Parameterization 1#

This is an intuitive, straightforward parameterization of the Weibull survival function, and probably the first one that comes to mind.
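The transform beta = exp(mu / alpha) used below has a convenient AFT interpretation: the Weibull log survival function becomes log S(t) = -(t/beta)^alpha = -exp(alpha * log(t) - mu), i.e. mu acts as a linear offset on the log-time scale. A quick numerical check of this identity (the mu, alpha, and t values are arbitrary illustrative choices):

```python
import numpy as np

mu, alpha = 0.3, 1.5
beta = np.exp(mu / alpha)
t = np.array([0.2, 0.8, 1.5])

log_sf_direct = -((t / beta) ** alpha)        # -(t/beta)^alpha
log_sf_aft = -np.exp(alpha * np.log(t) - mu)  # -exp(alpha*log(t) - mu)
print(np.allclose(log_sf_direct, log_sf_aft))  # the two forms agree
```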

# normalize the event time between 0 and 1
y_norm = y / np.max(y)
# Upper bound: the censoring time for censored observations, otherwise the maximum observed time
right_censored = [x if x > 0 else np.max(y_norm) for x in y_norm * censored]
with pm.Model() as model_1:
    alpha_sd = 1.0

    mu = pm.Normal("mu", mu=0, sigma=1)
    alpha_raw = pm.Normal("a0", mu=0, sigma=0.1)
    alpha = pm.Deterministic("alpha", pt.exp(alpha_sd * alpha_raw))
    beta = pm.Deterministic("beta", pt.exp(mu / alpha))
    beta_backtransformed = pm.Deterministic("beta_backtransformed", beta * np.max(y))

    latent = pm.Weibull.dist(alpha=alpha, beta=beta)
    y_obs = pm.Censored("Censored_likelihood", latent, upper=right_censored, observed=y_norm)
with model_1:
    idata_param1 = pm.sample(nuts_sampler="numpyro", random_seed=rng)
az.plot_trace(idata_param1, var_names=["alpha", "beta"])
[Figure: trace plots for alpha and beta]
az.summary(idata_param1, var_names=["alpha", "beta", "beta_backtransformed"], round_to=2)
                          mean       sd    hdi_3%   hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
alpha                     0.97     0.06      0.85      1.08       0.00      0.0   2752.10   2282.64    1.0
beta                      2.86     0.35      2.23      3.54       0.01      0.0   2677.53   2519.18    1.0
beta_backtransformed  14694.41  1806.79  11450.50  18165.12      34.64     24.5   2677.53   2519.18    1.0
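The back-transformed scale can be turned into an interpretable quantity: for a Weibull distribution, the median survival time is beta * log(2)^(1/alpha). Plugging in the posterior means from the summary above (a point-estimate shortcut; a proper analysis would compute this per posterior draw):

```python
import numpy as np

alpha_mean = 0.97       # posterior mean of alpha from the summary above
beta_mean = 14694.41    # posterior mean of beta_backtransformed, in days
median_survival = beta_mean * np.log(2) ** (1 / alpha_mean)
print(round(median_survival))  # roughly 10,000 days
```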

Parameterization 2#

Note that, confusingly, the shape parameter previously called alpha is now called r, while the name alpha is reused for a different prior; we maintain this notation to stay faithful to the original implementation in Stan. In this parameterization, we still model the same parameters: the shape (now r) and the scale beta.

For more information, see this Stan example model and the corresponding documentation.
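The main difference from Parameterization 1 is the sign of the exponent in the scale transform: with beta = exp(-alpha / r), the log survival function is log S(t) = -(t/beta)^r = -exp(r * log(t) + alpha), so alpha enters with the opposite sign. A quick numerical check of this identity (arbitrary illustrative values):

```python
import numpy as np

alpha, r = 0.3, 1.5
beta = np.exp(-alpha / r)
t = np.array([0.2, 0.8, 1.5])

log_sf_direct = -((t / beta) ** r)           # -(t/beta)^r
log_sf_aft = -np.exp(r * np.log(t) + alpha)  # -exp(r*log(t) + alpha)
print(np.allclose(log_sf_direct, log_sf_aft))
```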

with pm.Model() as model_2:
    alpha = pm.Normal("alpha", mu=0, sigma=1)
    r = pm.Gamma("r", alpha=2, beta=1)
    beta = pm.Deterministic("beta", pt.exp(-alpha / r))
    beta_backtransformed = pm.Deterministic("beta_backtransformed", beta * np.max(y))

    latent = pm.Weibull.dist(alpha=r, beta=beta)
    y_obs = pm.Censored("Censored_likelihood", latent, upper=right_censored, observed=y_norm)
with model_2:
    idata_param2 = pm.sample(nuts_sampler="numpyro", random_seed=rng)
az.plot_trace(idata_param2, var_names=["r", "beta"])
[Figure: trace plots for r and beta]
az.summary(idata_param2, var_names=["r", "beta", "beta_backtransformed"], round_to=2)
                          mean       sd    hdi_3%   hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
r                         0.95     0.08      0.80      1.09       0.00     0.00   2907.17   2504.73    1.0
beta                      2.93     0.43      2.23      3.81       0.01     0.01   2661.47   2456.59    1.0
beta_backtransformed  15060.96  2201.68  11442.10  19549.83      42.35    29.96   2661.47   2456.59    1.0

Parameterization 3#

In this parameterization, we model the log-linear error distribution with a Gumbel distribution instead of modelling the survival function directly. For more information, see this blog post.
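The connection to the Weibull model: if T ~ Weibull(alpha, beta), then log(T) follows a minimum-Gumbel distribution with location log(beta) and scale 1/alpha. A quick check using scipy (available here, since it is a dependency of statsmodels; the alpha, beta, and t values are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

alpha_, beta_ = 1.5, 2.0
t = np.array([0.5, 1.0, 3.0])

# CDF of T on the original time scale ...
cdf_weibull = stats.weibull_min.cdf(t, c=alpha_, scale=beta_)
# ... matches the minimum-Gumbel CDF of log(T)
cdf_gumbel = stats.gumbel_l.cdf(np.log(t), loc=np.log(beta_), scale=1 / alpha_)
print(np.allclose(cdf_weibull, cdf_gumbel))
```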

logtime = np.log(y)
# Upper bound: the censoring time for censored observations, otherwise the maximum observed log-time
right_censored = [x if x > 0 else np.max(logtime) for x in logtime * censored]
with pm.Model() as model_3:
    s = pm.HalfNormal("s", tau=3.0)
    gamma = pm.Normal("gamma", mu=0, sigma=5)

    latent = pm.Gumbel.dist(mu=gamma, beta=s)
    y_obs = pm.Censored("Censored_likelihood", latent, upper=right_censored, observed=logtime)
with model_3:
    idata_param3 = pm.sample(tune=4000, draws=2000, nuts_sampler="numpyro", random_seed=rng)
az.plot_trace(idata_param3)
[Figure: trace plots for gamma and s]
az.summary(idata_param3, round_to=2)
        mean    sd  hdi_3%  hdi_97%  mcse_mean  mcse_sd  ess_bulk  ess_tail  r_hat
gamma   9.49  0.21    9.10     9.89        0.0      0.0   2727.89   3337.16    1.0
s       3.54  0.16    3.24     3.85        0.0      0.0   2701.50   4170.84    1.0

Authors#

  • Originally collated by Junpeng Lao on Apr 21, 2018. See original code here.

  • Authored and ported to Jupyter notebook by George Ho on Jul 15, 2018.

  • Updated for compatibility with PyMC v5 by Chris Fonnesbeck on Jan 16, 2023.

  • Updated to replace pm.Potential with pm.Censored by Jonathan Dekermanjian on Nov 25, 2024.

%load_ext watermark
%watermark -n -u -v -iv -w
Last updated: Tue Nov 26 2024

Python implementation: CPython
Python version       : 3.12.7
IPython version      : 8.27.0

numpy      : 1.26.4
statsmodels: 0.14.4
pymc       : 5.18.2
arviz      : 0.20.0
pytensor   : 2.26.3

Watermark: 2.5.0

License notice#

All the notebooks in this example gallery are provided under the MIT License, which allows modification and redistribution for any use, provided the copyright and license notices are preserved.

Citing PyMC examples#

To cite this notebook, use the DOI provided by Zenodo for the pymc-examples repository.

Important

Many notebooks are adapted from other sources: blogs, books… In such cases you should cite the original source as well.

Also remember to cite the relevant libraries used by your code.

Here is a citation template in BibTeX:

@incollection{citekey,
  author    = "<notebook authors, see above>",
  title     = "<notebook title>",
  editor    = "PyMC Team",
  booktitle = "PyMC examples",
  doi       = "10.5281/zenodo.5654871"
}

which once rendered could look like: