- class pymc.MLDA(*args, **kwargs)
Multi-Level Delayed Acceptance (MLDA) sampling step that uses coarse approximations of a fine model to construct proposals in multiple levels.
MLDA creates a hierarchy of MCMC chains. Each chain samples from a different posterior, which should ideally approximate the fine (top-level) posterior while requiring less computational effort to evaluate its likelihood.
Each chain runs for a fixed number of iterations (up to subsampling_rate) and then the last sample generated is used as a proposal for the chain in the level above (except when variance reduction is used, in which case a random sample from the generated sequence is used instead). The bottom-level chain is a MetropolisMLDA or DEMetropolisZMLDA sampler.
The algorithm achieves higher acceptance rates and effective sample sizes than other samplers when the coarse models are sufficiently good approximations of the fine one.
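The mechanics described above can be sketched in plain numpy for a toy two-level case (a conceptual illustration only, not the PyMC implementation; the target densities and helper names are hypothetical): a coarse Metropolis subchain runs for subsampling_rate iterations, and its last state is accepted or rejected at the fine level with the delayed-acceptance ratio.

```python
import numpy as np

# Toy two-level MLDA sketch (illustration only, not the PyMC implementation).
# Hypothetical targets: the "fine" posterior is N(0, 1); the "coarse" model
# is a cheap, slightly biased approximation.
def logp_fine(x):
    return -0.5 * x ** 2

def logp_coarse(x):
    return -0.5 * (x - 0.1) ** 2 / 1.2

def mlda_step(x, rng, subsampling_rate=5, scaling=1.0):
    """One fine-level step: run a coarse Metropolis subchain started at the
    current fine state, then use its last sample as the fine-level proposal."""
    y = x
    for _ in range(subsampling_rate):
        prop = y + scaling * rng.normal()
        if np.log(rng.uniform()) < logp_coarse(prop) - logp_coarse(y):
            y = prop
    # Delayed-acceptance ratio: the coarse density terms correct for the
    # bias of using the coarse chain as the proposal mechanism.
    log_alpha = (logp_fine(y) - logp_fine(x)) - (logp_coarse(y) - logp_coarse(x))
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(5000):
    x = mlda_step(x, rng)
    samples.append(x)
samples = np.asarray(samples)
# mean and std should be close to 0 and 1 for the N(0, 1) fine target
print(samples.mean(), samples.std())
```

Because the coarse chain is a good approximation of the fine target here, most fine-level proposals are accepted, which is exactly the regime in which MLDA pays off.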
- coarse_models: list
List of coarse (multi-level) models, where the first model is the coarsest (level=0) and the last is the second finest (level=L-1, where L is the number of levels). Note that this list excludes the model passed via the model argument, which is the finest available.
- vars: list
List of value variables for the sampler.
- base_sampler: string
Sampler used in the base (coarsest) chain. Can be ‘Metropolis’ or ‘DEMetropolisZ’. Defaults to ‘DEMetropolisZ’.
- base_S: standard deviation or covariance matrix
Some measure of variance to parameterize the base proposal distribution.
- base_proposal_dist: function
Function that returns zero-mean deviates when parameterized with S (and n). Defaults to normal. This is the proposal used in the coarsest (base) chain, i.e. level=0.
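A hypothetical stand-in for such a proposal function might look as follows (the names and calling convention are illustrative assumptions, not PyMC internals; only the contract described above is taken from the source: zero-mean deviates parameterized with S and n):

```python
import numpy as np

# Illustrative stand-in for a base proposal distribution: given S (some
# measure of variance) and n (the dimension), return a callable that
# produces zero-mean deviates. Names are hypothetical, not PyMC internals.
def normal_proposal(S, n):
    S = np.asarray(S, dtype=float)
    def propose(rng):
        return S * rng.standard_normal(n)  # zero mean, scaled by S
    return propose

rng = np.random.default_rng(42)
propose = normal_proposal(S=0.5, n=3)
draws = np.array([propose(rng) for _ in range(1000)])
print(draws.shape)  # (1000, 3)
```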
- base_scaling: scalar or array
Initial scale factor for base proposal. Defaults to 1 if base_sampler is ‘Metropolis’ and to 0.001 if base_sampler is ‘DEMetropolisZ’.
- tune: bool
Flag for tuning of the base proposal. If base_sampler is ‘Metropolis’ it should be True or False and defaults to True. Note that this is overridden by the tune parameter in sample(). For example, when calling step=MLDA(tune=False, …) and then sample(step=step, tune=200, …), tuning will be activated for the first 200 steps. If base_sampler is ‘DEMetropolisZ’, it should be True. For ‘DEMetropolisZ’, the separate argument base_tune_target allows modifying the type of tuning.
- base_tune_target: string
Defines the type of tuning that is performed when base_sampler is ‘DEMetropolisZ’. Allowable values are ‘lambda’, ‘scaling’ or None; defaults to ‘lambda’.
- base_tune_interval: integer
The frequency of tuning for the base proposal. Defaults to 100 iterations.
- base_lamb: float
Lambda parameter of the base-level DE proposal mechanism. Only applicable when base_sampler is ‘DEMetropolisZ’. Defaults to 2.38 / sqrt(2 * ndim).
- base_tune_drop_fraction: float
Fraction of tuning steps that will be removed from the base-level sampler’s history when tuning ends. Only applicable when base_sampler is ‘DEMetropolisZ’. Defaults to 0.9, i.e. the last 10% of tuning steps are kept for good mixing while 90% of potentially unconverged tuning positions are removed.
- model: Model
Optional model for the sampling step. Defaults to None (taken from context). This model should be the finest of all multilevel models.
- mode: str or Mode instance
Compilation mode passed to Aesara functions.
- subsampling_rates: integer or list of integers
One integer for all levels, or a list with one number for each level (excluding the finest level). This is the number of samples generated in level l-1 to propose a sample for level l; it applies to all levels except the finest. The length of the list needs to be the same as the length of coarse_models.
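A scalar rate therefore has to be expanded to one rate per coarse level; a minimal sketch of that normalization, with a hypothetical helper name (this is an illustration of the documented behavior, not PyMC code):

```python
# Hypothetical helper showing how a scalar subsampling rate could be
# expanded to one rate per coarse level (illustration, not PyMC internals).
def normalize_subsampling_rates(subsampling_rates, coarse_models):
    if isinstance(subsampling_rates, int):
        # one integer for all levels -> repeat it per coarse model
        return [subsampling_rates] * len(coarse_models)
    if len(subsampling_rates) != len(coarse_models):
        raise ValueError("subsampling_rates must have one entry per coarse model")
    return list(subsampling_rates)

print(normalize_subsampling_rates(5, ["coarse0", "coarse1"]))  # [5, 5]
```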
- base_blocked: bool
Flag to choose whether the base sampler (level=0) is a compound MetropolisMLDA step (base_blocked=False) or a blocked MetropolisMLDA step (base_blocked=True). Only applicable when base_sampler=‘Metropolis’.
- variance_reduction: bool
Calculate and store quantities of interest and quantity of interest differences between levels to enable computing a variance-reduced sum of the quantity of interest after sampling. In order to use variance reduction, the user needs to do the following when defining the PyMC model (also demonstrated in the example notebook):
- Include a pm.Data() variable with the name Q in the model description of all levels.
- Use an Aesara Op to calculate the forward model (or the combination of a forward model and a likelihood). This Op should have a perform() method which, in addition to all the other calculations, calculates the quantity of interest and stores it in the variable Q of the PyMC model, using the set_value() function.
When variance_reduction=True, all subchains run for a fixed number of iterations (equal to subsampling_rates) and a random sample is selected from the generated sequence (instead of the last sample which is selected when variance_reduction=False).
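The point of storing per-level quantities is the telescoping-sum identity E[Q_L] = E[Q_0] + Σ_l E[Q_l − Q_{l−1}]: many cheap coarse samples estimate the bulk of the expectation, while few expensive samples estimate the small level-to-level corrections. A toy numpy sketch of assembling that variance-reduced estimate (the stored arrays below are synthetic stand-ins, not MLDA output):

```python
import numpy as np

# Toy illustration of the variance-reduced (telescoping sum) estimator:
# E[Q_L] ~ mean(Q_0) + sum over levels of mean(Q_l - Q_{l-1}).
rng = np.random.default_rng(1)
true_Q = 2.0
# Synthetic stand-ins for the stored quantities of interest:
Q0 = true_Q - 0.3 + 0.5 * rng.standard_normal(2000)  # biased, cheap, many samples
diff1 = 0.2 + 0.1 * rng.standard_normal(400)         # corrects part of the bias
diff2 = 0.1 + 0.05 * rng.standard_normal(100)        # small final correction
estimate = Q0.mean() + diff1.mean() + diff2.mean()
print(estimate)  # close to true_Q = 2.0
```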
- store_Q_fine: bool
Store the values of the quantity of interest from the fine chain.
- adaptive_error_model: bool
When True, MLDA will use the adaptive error model method proposed in [Cui2012]. The method requires the likelihood of the model to be adaptive and a forward model to be defined and fed to the sampler. Thus, it only works when the user does the following (also demonstrated in the example notebook):
- Include in the model definition at all levels the extra variable model_output, which will capture the forward model outputs. Also include in the model definition at all levels except the finest one the extra variables mu_B and Sigma_B, which will capture the bias between different levels. All these variables should be instantiated using the pm.Data method.
- Use an Aesara Op to define the forward model (and optionally the likelihood) for all levels. The Op needs to store the result of each forward model calculation in the variable model_output of the PyMC model, using the set_value() function.
- Define a multivariate normal likelihood (either using the standard PyMC API or within an Op) which has mean equal to the forward model output plus mu_B, and covariance equal to the model error plus Sigma_B.
Given the above, MLDA will capture and iteratively update the bias terms internally for all level pairs and will correct each level so that all levels’ forward models aim to estimate the finest level’s forward model.
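In the spirit of [Cui2012], the bias terms can be maintained as the running mean and covariance of accepted forward-model output differences between adjacent levels. The following numpy sketch illustrates that kind of iterative update under stated assumptions (the class and its fields are hypothetical, not MLDA internals):

```python
import numpy as np

# Toy running estimate of the bias between two levels' forward models:
# after each accepted sample, mu_B / Sigma_B track the mean and covariance
# of the output differences (hypothetical helper, not PyMC internals).
class BiasEstimate:
    def __init__(self, dim):
        self.n = 0
        self.mu_B = np.zeros(dim)
        self._outer = np.zeros((dim, dim))  # running sum of outer products

    def update(self, diff):
        """diff = fine forward output - coarse forward output (one sample)."""
        diff = np.asarray(diff, dtype=float)
        self.n += 1
        self.mu_B += (diff - self.mu_B) / self.n  # running mean
        self._outer += np.outer(diff, diff)

    @property
    def Sigma_B(self):
        if self.n < 2:
            return np.zeros_like(self._outer)
        # sample covariance: (sum of outer products - n * mean mean^T) / (n - 1)
        return (self._outer - self.n * np.outer(self.mu_B, self.mu_B)) / (self.n - 1)

est = BiasEstimate(dim=2)
rng = np.random.default_rng(3)
for _ in range(5000):
    # synthetic output differences with true bias [1.0, -0.5] plus noise
    est.update(np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(2))
print(est.mu_B)  # converges towards the true bias [1.0, -0.5]
```

In MLDA the analogous estimates would be pushed into the mu_B and Sigma_B pm.Data variables via set_value(), shifting each level's likelihood so that all levels' forward models aim at the finest level's forward model.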
- [Dodwell2019] Dodwell, Tim; Ketelsen, Chris; Scheichl, Robert; Teckentrup, Aretha (2019). Multilevel Markov Chain Monte Carlo. SIAM Review, 61(3), 509-545.
- [Cui2012] Cui, Tiangang; Fox, Colin; O’Sullivan, Michael (2012). Adaptive Error Modelling in MCMC Sampling for Large Scale Inverse Problems.
- MLDA.__init__(coarse_models[, vars, ...])
- MLDA.astep(q0): One MLDA step, given current sample q0.
- MLDA.competence(var, has_grad): Return MLDA competence for the given var/has_grad.
- MLDA.step(point): Perform a single step of the sampler.
- Update the adaptive error model estimate with the latest accepted forward model output difference.
- Update all the variables necessary for variance reduction (VR) to work.