pymc.KLqp

class pymc.KLqp(approx, beta=1.0)

Kullback-Leibler Divergence Inference

General approach to fitting Approximations that define \(logq\) by maximizing the ELBO (Evidence Lower Bound). In some cases, rescaling the KL regularization term can be beneficial:

\[\operatorname{ELBO}_\beta = \log p(D \mid \theta) - \beta \operatorname{KL}(q \parallel p)\]
Parameters:
approx: :class:`Approximation`

Approximation to fit; it is required to have logQ.

beta: float

Scales the regularization term in the ELBO (see Christopher P. Burgess et al., 2017).

References

  • Christopher P. Burgess et al. (NIPS, 2017), "Understanding disentangling in \(\beta\)-VAE", arXiv preprint arXiv:1804.03599.
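
As an illustration, a minimal sketch of fitting a model with KLqp via a mean-field approximation; the toy model, data, and argument values below are assumptions, not part of this reference:

    import numpy as np
    import pymc as pm

    # Toy data and model; names and values are illustrative only
    data = np.random.normal(loc=0.0, scale=1.0, size=100)

    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=1.0)
        pm.Normal("obs", mu=mu, sigma=1.0, observed=data)

        approx = pm.MeanField()                # Approximation defining logQ
        inference = pm.KLqp(approx, beta=1.0)  # beta=1.0 gives the standard ELBO
        fitted = inference.fit(n=10000)        # maximize the ELBO; returns the fitted Approximation

Setting beta above or below 1.0 rescales the KL regularization term as in the formula above.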

Methods

KLqp.__init__(approx[, beta])

KLqp.fit([n, score, callbacks, progressbar])

Perform Operator Variational Inference

KLqp.refine(n[, progressbar])

Refine the solution using the last compiled step function

KLqp.run_profiling([n, score])
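
Continuing the sketch above, the listed methods can then be used to refine the solution or profile the compiled step function; argument values here are illustrative:

    # Run additional optimization steps with the last compiled step function
    inference.refine(n=5000, progressbar=False)

    # Profile the step function over a small number of iterations
    inference.run_profiling(n=100)

    # The fitted Approximation remains accessible as an attribute
    q = inference.approx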

Attributes

approx