Posts tagged gradient-free inference
DEMetropolis(Z) Sampler Tuning
- 18 January 2023
For continuous variables, the default PyMC sampler (NUTS) requires that gradients be computed, which PyMC does through automatic differentiation. However, in some cases a PyMC model may not be supplied with gradients (for example, when a numerical model is evaluated outside of PyMC) and an alternative sampler is necessary. The DEMetropolisZ sampler is an efficient choice for gradient-free inference. The implementation of DEMetropolisZ in PyMC is based on ter Braak and Vrugt [2008] but with a modified tuning scheme. This notebook compares various settings of the sampler's tuning parameters, including the tune_drop_fraction parameter, which was introduced in PyMC.
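As a rough illustration, the sketch below shows how the DEMetropolisZ step and its tuning settings might be passed to pm.sample. The model is a hypothetical toy, and the settings shown are the PyMC defaults rather than recommendations from the notebook:

```python
import pymc as pm

with pm.Model():
    # Hypothetical toy model; the notebook's actual models differ.
    x = pm.Normal("x", mu=0.0, sigma=1.0)

    # tune="lambda" adapts the differential-evolution scaling factor during
    # tuning; tune_drop_fraction sets how much of the tuning history is
    # discarded when tuning ends (0.9 is the PyMC default).
    step = pm.DEMetropolisZ(tune="lambda", tune_drop_fraction=0.9)
    idata = pm.sample(step=step, tune=5000, draws=5000, chains=4)
```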
DEMetropolis and DEMetropolis(Z) Algorithm Comparisons
- 18 January 2023
For continuous variables, the default PyMC sampler (NUTS) requires that gradients be computed, which PyMC does through automatic differentiation. However, in some cases a PyMC model may not be supplied with gradients (for example, when a numerical model is evaluated outside of PyMC) and an alternative sampler is necessary. Differential evolution (DE) Metropolis samplers are an efficient choice for gradient-free inference. This notebook compares the DEMetropolis and DEMetropolisZ samplers in PyMC to help determine which is the better option for a given problem.
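For reference, a minimal sketch of how the two steppers might be run on the same problem; the model and chain counts are illustrative assumptions, not the notebook's setup. The key difference is that DEMetropolis proposes jumps using a population of parallel chains, whereas DEMetropolisZ proposes jumps from each chain's own sampling history:

```python
import pymc as pm

with pm.Model():
    # Hypothetical 10-dimensional toy model for comparing the samplers.
    x = pm.Normal("x", mu=0.0, sigma=1.0, shape=10)

    # DEMetropolis draws proposals from the current states of other chains,
    # so it benefits from many parallel chains (often >= 2 * dimensionality).
    idata_de = pm.sample(step=pm.DEMetropolis(), chains=20, draws=2000)

    # DEMetropolisZ draws proposals from the chain's own past states,
    # so it works with the usual small number of chains.
    idata_dez = pm.sample(step=pm.DEMetropolisZ(), chains=4, draws=2000)
```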
ODE Lotka-Volterra With Bayesian Inference in Multiple Ways
- 16 January 2023
The purpose of this notebook is to demonstrate how to perform Bayesian inference on a system of ordinary differential equations (ODEs), both with and without gradients. The accuracy and efficiency of different samplers are compared.
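As background, the Lotka-Volterra predator-prey system referred to above can be solved numerically with SciPy; the inference in the notebook is built on top of such a solver. The sketch below uses illustrative parameter values and initial conditions, not the notebook's:

```python
import numpy as np
from scipy.integrate import odeint

def lotka_volterra(y, t, alpha, beta, gamma, delta):
    """Right-hand side of the Lotka-Volterra predator-prey ODEs."""
    prey, predator = y
    dprey_dt = alpha * prey - beta * prey * predator
    dpredator_dt = -gamma * predator + delta * prey * predator
    return [dprey_dt, dpredator_dt]

# Illustrative parameter values and initial populations (assumptions).
theta = (0.5, 0.02, 0.5, 0.004)
t = np.linspace(0, 20, 100)
solution = odeint(lotka_volterra, y0=[10.0, 5.0], t=t, args=theta)
```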