pymc.MLDA

class pymc.MLDA(*args, **kwargs)

Multi-Level Delayed Acceptance (MLDA) sampling step that uses coarse approximations of a fine model to construct proposals in multiple levels.

MLDA creates a hierarchy of MCMC chains. The chains sample from different posteriors, which should ideally be approximations of the fine (top-level) posterior that are cheaper to evaluate.

Each chain runs for a fixed number of iterations (equal to subsampling_rate) and the last sample generated is then used as a proposal for the chain in the level above (except when variance reduction is used, in which case a random sample from the generated sequence is used instead). The bottom-level chain uses a MetropolisMLDA or DEMetropolisZMLDA sampler.

The algorithm achieves higher acceptance rates and effective sample sizes than other samplers when the coarse models are sufficiently good approximations of the fine one.
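For intuition, here is a schematic sketch of the proposal mechanism described above. This is not the library's internals; coarse_step and subsampling_rate are stand-ins for the configured lower-level sampler and rate:

    # Schematic sketch of one MLDA proposal (not the library's internals).
    # `coarse_step` stands in for the sampler of the level below.
    def mlda_propose(q_current, coarse_step, subsampling_rate):
        q = q_current
        for _ in range(subsampling_rate):
            q = coarse_step(q)  # run the lower-level subchain
        return q  # the last subchain sample becomes the proposal above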

Parameters
coarse_models : list

List of coarse (multi-level) models, where the first model is the coarsest one (level=0) and the last model is the second finest one (level=L-1, where L is the number of levels). Note that this list excludes the model passed to the model argument, which is the finest available.

vars : list

List of value variables for the sampler.

base_sampler : str

Sampler used in the base (coarsest) chain. Can be ‘Metropolis’ or ‘DEMetropolisZ’. Defaults to ‘DEMetropolisZ’.

base_S : standard deviation or covariance matrix

Some measure of variance to parameterize the base proposal distribution.

base_proposal_dist : function

Function that returns zero-mean deviates when parameterized with S (and n). Defaults to a normal distribution. This is the proposal used in the coarsest (base) chain, i.e. level=0.

base_scaling : scalar or array

Initial scale factor for base proposal. Defaults to 1 if base_sampler is ‘Metropolis’ and to 0.001 if base_sampler is ‘DEMetropolisZ’.

tune : bool

Flag for tuning in the base proposal. If base_sampler is ‘Metropolis’ it should be True or False and defaults to True. Note that this is overridden by the tune parameter in sample(). For example, when calling step=MLDA(tune=False, …) and then sample(step=step, tune=200, …), tuning will be activated for the first 200 steps. If base_sampler is ‘DEMetropolisZ’, it should be True. For ‘DEMetropolisZ’, there is a separate argument, base_tune_target, which allows modifying the type of tuning.
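For illustration, a minimal sketch of this interplay (coarse_models is assumed to be defined elsewhere, and the numbers are arbitrary):

    import pymc as pm

    # Even though tune=False is set here, pm.sample(tune=200) overrides it
    # and tunes the base proposal during the first 200 iterations.
    step = pm.MLDA(coarse_models=coarse_models, tune=False)
    idata = pm.sample(draws=500, tune=200, step=step)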

base_tune_target : str

Defines the type of tuning that is performed when base_sampler is ‘DEMetropolisZ’. Allowable values are ‘lambda’, ‘scaling’ or None, and it defaults to ‘lambda’.

base_tune_interval : int

The frequency of tuning for the base proposal. Defaults to 100 iterations.

base_lamb : float

Lambda parameter of the base level DE proposal mechanism. Only applicable when base_sampler is ‘DEMetropolisZ’. Defaults to 2.38 / sqrt(2 * ndim).

base_tune_drop_fraction : float

Fraction of tuning steps that will be removed from the base level sampler's history when tuning ends. Only applicable when base_sampler is ‘DEMetropolisZ’. Defaults to 0.9, i.e. the last 10% of tuning steps are kept for good mixing, while 90% of potentially unconverged tuning positions are removed.
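For illustration, a hypothetical configuration of a DEMetropolisZ base chain using the tuning arguments above (coarse_models is assumed to be defined, and the values are assumptions, not recommendations):

    import pymc as pm

    step = pm.MLDA(
        coarse_models=coarse_models,      # assumed list of coarse models
        base_sampler="DEMetropolisZ",
        base_tune_target="lambda",        # tune the DE lambda parameter
        base_tune_interval=100,           # adapt every 100 iterations
        base_tune_drop_fraction=0.9,      # drop 90% of the tuning history
    )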

model : PyMC Model

Optional model for sampling step. Defaults to None (taken from context). This model should be the finest of all multilevel models.

mode : str or Mode instance

Compilation mode passed to Aesara functions.

subsampling_rates : integer or list of integers

One integer for all levels, or a list with one number per level (excluding the finest level). This is the number of samples generated at level l-1 to propose a sample for level l; it applies to all levels except the finest. The length of the list must equal the length of coarse_models.
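For example, with two coarse models one might write (the model names and rates are illustrative):

    import pymc as pm

    # 5 level-0 samples per level-1 proposal and 3 level-1 samples per
    # fine-level proposal; the list has one entry per coarse model
    step = pm.MLDA(coarse_models=[coarsest_model, coarse_model],
                   subsampling_rates=[5, 3])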

base_blocked : bool

Flag to choose whether base sampler (level=0) is a Compound MetropolisMLDA step (base_blocked=False) or a blocked MetropolisMLDA step (base_blocked=True). Only applicable when base_sampler=’Metropolis’.

variance_reduction : bool

Calculate and store quantities of interest and the differences of the quantity of interest between levels, to enable computing a variance-reduced sum of the quantity of interest after sampling. In order to use variance reduction, the user needs to do the following when defining the PyMC model (also demonstrated in the example notebook; a sketch follows this description):

  • Include a pm.Data() variable with the name Q in the model description of all levels.

  • Use an Aesara Op to calculate the forward model (or the combination of a forward model and a likelihood). This Op should have a perform() method which (in addition to all the other calculations) calculates the quantity of interest and stores it to the variable Q of the PyMC model, using the set_value() function.

When variance_reduction=True, all subchains run for a fixed number of iterations (equal to subsampling_rates) and a random sample is selected from the generated sequence (instead of the last sample which is selected when variance_reduction=False).
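A minimal sketch of the setup described above, assuming a user-defined solver my_forward_model and a scalar quantity of interest; the Op name ForwardWithQ and all shapes are hypothetical:

    import numpy as np
    import aesara.tensor as at
    from aesara.graph.op import Op
    import pymc as pm

    class ForwardWithQ(Op):       # hypothetical Op
        itypes = [at.dvector]     # parameter vector in
        otypes = [at.dvector]     # forward model output out

        def __init__(self, Q_var):
            self.Q_var = Q_var    # shared variable created by pm.Data("Q", ...)

        def perform(self, node, inputs, outputs):
            (theta,) = inputs
            out = my_forward_model(theta)                 # assumed user-defined solver
            self.Q_var.set_value(np.float64(out.mean()))  # store the quantity of interest in Q
            outputs[0][0] = out

    with pm.Model() as level_model:          # repeat for every level
        Q = pm.Data("Q", np.float64(0.0))    # the Q variable read by MLDA
        theta = pm.Normal("theta", 0, 1, shape=2)
        f = ForwardWithQ(Q)(theta)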

store_Q_fine : bool

Store the values of the quantity of interest from the fine chain.

adaptive_error_model : bool

When True, MLDA will use the adaptive error model method proposed in [Cui2012]. The method requires the likelihood of the model to be adaptive and a forward model to be defined and fed into the sampler. Thus, it only works when the user does the following (also demonstrated in the example notebook; a sketch follows this description):

  • Include in the model definition at all levels the extra variable model_output, which will capture the forward model outputs. Also include in the model definition at all levels except the finest one the extra variables mu_B and Sigma_B, which will capture the bias between different levels. All these variables should be instantiated using the pm.Data method.

  • Use an Aesara Op to define the forward model (and optionally the likelihood) for all levels. The Op needs to store the result of each forward model calculation to the variable model_output of the PyMC model, using the set_value() function.

  • Define a Multivariate Normal likelihood (either using the standard PyMC API or within an Op) which has mean equal to the forward model output plus mu_B and covariance equal to the model error plus Sigma_B.

Given the above, MLDA will capture and iteratively update the bias terms internally for all level pairs and will correct each level so that all levels’ forward models aim to estimate the finest level’s forward model.
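A minimal sketch of a coarse-level model set up as described above; ForwardOp stands in for an Op that writes its result into model_output (as in the variance reduction sketch), and the shapes, error covariance, and placeholder data are illustrative assumptions:

    import numpy as np
    import pymc as pm

    n_out = 3                                  # assumed forward model output dimension
    data = np.zeros(n_out)                     # placeholder observations

    with pm.Model() as coarse_model:
        # containers that MLDA reads and updates internally (coarse levels only)
        model_output = pm.Data("model_output", np.zeros(n_out))
        mu_B = pm.Data("mu_B", np.zeros(n_out))
        Sigma_B = pm.Data("Sigma_B", np.zeros((n_out, n_out)))

        theta = pm.Normal("theta", 0, 1, shape=2)
        f = ForwardOp(model_output)(theta)     # hypothetical Op; stores its result in model_output

        Sigma_e = 0.1 * np.eye(n_out)          # assumed model error covariance
        y = pm.MvNormal("y", mu=f + mu_B, cov=Sigma_e + Sigma_B, observed=data)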

References

[Dodwell2019]

Dodwell, Tim & Ketelsen, Chris & Scheichl, Robert & Teckentrup, Aretha. (2019). Multilevel Markov Chain Monte Carlo. SIAM Review, 61, 509-545.

[Cui2012]

Cui, Tiangang & Fox, Colin & O’Sullivan, Michael. (2012). Adaptive Error Modelling in MCMC Sampling for Large Scale Inverse Problems.

Examples
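A minimal two-level example with a toy Gaussian model (the numbers and sampling arguments are illustrative):

    import pymc as pm

    datum = 1.0

    with pm.Model() as coarse_model:
        x = pm.Normal("x", mu=0.0, sigma=10.0)
        y = pm.Normal("y", mu=x, sigma=1.0, observed=datum - 0.1)

    with pm.Model():
        x = pm.Normal("x", mu=0.0, sigma=10.0)
        y = pm.Normal("y", mu=x, sigma=1.0, observed=datum)
        step = pm.MLDA(coarse_models=[coarse_model], subsampling_rates=5)
        idata = pm.sample(draws=500, chains=2, tune=100, step=step,
                          random_seed=123)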

Methods

MLDA.__init__(coarse_models[, vars, ...])

Parameters

MLDA.astep(q0)

One MLDA step, given current sample q0

MLDA.competence(var, has_grad)

Return MLDA competence for given var/has_grad.

MLDA.step(point)

Perform a single step of the sampler.

MLDA.stop_tuning()

MLDA.update_error_estimate(accepted, ...)

Updates the adaptive error model estimate with the latest accepted forward model output difference.

MLDA.update_vr_variables(accepted, skipped_logp)

Updates all the variables necessary for VR to work.

Attributes

default_blocked

generates_stats

name

stats_dtypes

vars