pymc.ASVGD

class pymc.ASVGD(approx=None, estimator=<class 'pymc.variational.operators.KSD'>, kernel=<pymc.variational.test_functions.RBF object>, **kwargs)

Amortized Stein Variational Gradient Descent

This method is not recommended for general use.

This inference method is based on the Kernelized Stein Discrepancy. Its main idea is to move an initial set of noisy particles so that they fit the target distribution as closely as possible.
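For context, a minimal sketch of how ASVGD is typically invoked through the generic pm.fit shortcut; the toy model, data, and iteration count are illustrative assumptions, not recommendations.

```python
import pymc as pm

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=[0.1, -0.3, 0.2])

    # Select ASVGD through the generic fit() entry point.
    approx = pm.fit(n=10000, method="asvgd")

idata = approx.sample(1000)  # draw samples from the fitted approximation
```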

The algorithm is outlined below.

Input: Parametrized random generator \(R_{\theta}\)

Output: \(R_{\theta^{*}}\) that approximates the target distribution.

\[\begin{split}\Delta x_i &= \hat{\phi}^{*}(x_i) \\ \hat{\phi}^{*}(x) &= \frac{1}{n}\sum^{n}_{j=1}\left[k(x_j, x) \nabla_{x_j} \log p(x_j) + \nabla_{x_j} k(x_j, x)\right] \\ \Delta_{\theta} &= \frac{1}{n}\sum^{n}_{i=1}\Delta x_i \frac{\partial x_i}{\partial \theta}\end{split}\]
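For intuition, here is a small NumPy sketch (an illustration, not PyMC's implementation) of the empirical direction \(\hat{\phi}^{*}\) with an RBF kernel; in ASVGD this direction is back-propagated into the generator parameters \(\theta\) rather than applied to fixed particles. The bandwidth, step size, and standard-normal target below are arbitrary choices.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel values k(x_j, x_i) and their gradients with respect to x_j."""
    diffs = X[:, None, :] - X[None, :, :]        # (n, n, d): diffs[j, i] = x_j - x_i
    sq_dists = np.sum(diffs ** 2, axis=-1)       # (n, n)
    K = np.exp(-sq_dists / (2.0 * h ** 2))       # K[j, i] = k(x_j, x_i)
    grad_K = -diffs * K[..., None] / h ** 2      # grad_K[j, i] = grad_{x_j} k(x_j, x_i)
    return K, grad_K

def stein_direction(X, grad_logp, h=1.0):
    """Empirical Stein variational direction phi*(x_i) from the update rule above."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    # (1/n) * sum_j [ k(x_j, x_i) * grad_logp(x_j) + grad_{x_j} k(x_j, x_i) ]
    return (K.T @ grad_logp + grad_K.sum(axis=0)) / n

# Toy usage: pull noisy particles toward a standard normal, where grad log p(x) = -x.
rng = np.random.default_rng(0)
particles = rng.normal(loc=5.0, size=(100, 1))
for _ in range(500):
    particles += 0.1 * stein_direction(particles, -particles)
```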
Parameters:
approx: :class:`Approximation`

default is FullRank, but any Approximation can be used

kernel: `callable`

kernel function for KSD \(f(\text{histogram}) \rightarrow (k(x, \cdot), \nabla_x k(x, \cdot))\)

model: :class:`Model`
kwargs: keyword arguments passed to the gradient estimator
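A minimal sketch of constructing the class directly with its defaults; the toy model and iteration count are illustrative assumptions.

```python
import pymc as pm

with pm.Model():
    mu = pm.Normal("mu", 0.0, 1.0)
    pm.Normal("obs", mu, 1.0, observed=[0.1, -0.3, 0.2])

    # approx=None builds a FullRank approximation from the model in context;
    # any other Approximation instance could be passed via the approx parameter.
    inference = pm.ASVGD()
    approx = inference.fit(n=5000)  # returns the fitted Approximation
```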

References

  • Dilin Wang, Yihao Feng, Qiang Liu (2016) Learning to Sample Using Stein Discrepancy http://bayesiandeeplearning.org/papers/BDL_21.pdf

  • Dilin Wang, Qiang Liu (2016) Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning arXiv:1611.01722

  • Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng (2017) Stein Variational Policy Gradient arXiv:1704.02399

Methods

ASVGD.__init__([approx, estimator, kernel])

ASVGD.fit([n, score, callbacks, ...])

Perform Operator Variational Inference

ASVGD.refine(n[, progressbar, progressbar_theme])

Refine the solution using the last compiled step function

ASVGD.run_profiling([n, score, obj_n_mc])

Attributes

approx
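A hedged end-to-end sketch tying together the methods and attribute listed above; the toy model and iteration counts are arbitrary assumptions.

```python
import pymc as pm

with pm.Model():
    x = pm.Normal("x", 0.0, 1.0)

    inference = pm.ASVGD()
    approx = inference.fit(n=10000)            # run Operator Variational Inference
    inference.refine(1000)                     # continue with the last compiled step function
    profile = inference.run_profiling(n=100)   # profile the step function

inference.approx  # the fitted approximation is also exposed as an attribute
```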