pymc.fit

pymc.fit(n=10000, method='advi', model=None, random_seed=None, start=None, start_sigma=None, inf_kwargs=None, **kwargs)

Handy shortcut for using inference methods in a functional way.
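
A minimal usage sketch (the toy data, model, and variable names below are illustrative only, not part of the API):

```python
import numpy as np
import pymc as pm

# Toy data and model; names are purely illustrative.
data = np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=100)
with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu, sigma, observed=data)

    # Default method is ADVI; returns an Approximation.
    approx = pm.fit(n=10000)

# Draw samples from the fitted approximation.
idata = approx.sample(1000)
```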

Parameters:
n: `int`

number of iterations

method: str or :class:`Inference`

string name, matched case-insensitively; one of the following (a short sketch follows this list):

  • ‘advi’ for ADVI

  • ‘fullrank_advi’ for FullRankADVI

  • ‘svgd’ for Stein Variational Gradient Descent

  • ‘asvgd’ for Amortized Stein Variational Gradient Descent
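
The method can be passed either as one of the strings above or as an :class:`Inference` instance, for example (a throwaway one-variable model, for illustration only):

```python
import pymc as pm

with pm.Model():
    pm.Normal("x", 0.0, 1.0)

    # Equivalent ways to request full-rank ADVI:
    approx = pm.fit(n=5000, method="fullrank_advi")    # case-insensitive string
    approx = pm.fit(n=5000, method=pm.FullRankADVI())  # Inference instance
```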

model: :class:`Model`

PyMC model for inference

random_seed: None or int

seed for the random number generator; set it to make the fit reproducible

inf_kwargs: dict

additional kwargs passed to Inference

start: `dict[str, np.ndarray]` or `StartDict`

starting point for inference

start_sigma: `dict[str, np.ndarray]`

starting standard deviation for inference, only available for method ‘advi’
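
For example, starting means and (for ADVI) starting standard deviations can be supplied per free variable; the variable names and values below are illustrative:

```python
import numpy as np
import pymc as pm

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    beta = pm.Normal("beta", 0.0, 1.0)

    # Initialize the approximation's means and standard deviations.
    approx = pm.fit(
        n=10000,
        method="advi",
        start={"mu": np.array(1.0), "beta": np.array(0.0)},
        start_sigma={"mu": np.array(0.5), "beta": np.array(0.5)},
    )
```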

Returns:
Approximation
Other Parameters:
score: bool

whether to evaluate the loss on each iteration

callbacks: list[function: (Approximation, losses, i) -> None]

calls provided functions after each iteration step
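
A callback is any callable taking (approximation, loss history, iteration index). For example (the loss-printing helper below is hypothetical; CheckParametersConvergence is the ready-made convergence callback in pymc.variational.callbacks):

```python
import pymc as pm
from pymc.variational.callbacks import CheckParametersConvergence

def print_loss(approx, losses, i):
    # Hypothetical helper: report the running loss every 1000 iterations.
    if len(losses) and i % 1000 == 0:
        print(f"iteration {i}: loss {losses[-1]:.2f}")

with pm.Model():
    pm.Normal("x", 0.0, 1.0)
    approx = pm.fit(
        n=10000,
        score=True,  # track losses so the callback has something to report
        callbacks=[print_loss, CheckParametersConvergence()],
    )
```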

progressbar: bool

whether to show a progress bar

obj_n_mc: `int`

Number of Monte Carlo samples used to approximate the objective gradients

tf_n_mc: `int`

Number of Monte Carlo samples used to approximate the test function gradients

obj_optimizer: function (grads, params) -> updates

Optimizer that is used for objective params
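
For example, the default optimizer can be swapped for another of the optimizer constructors defined in pymc.variational.updates (a sketch with an arbitrary learning rate):

```python
import pymc as pm
from pymc.variational.updates import adam

with pm.Model():
    pm.Normal("x", 0.0, 1.0)
    # Called with keyword arguments only, `adam` returns a
    # (grads, params) -> updates function suitable for obj_optimizer.
    approx = pm.fit(n=10000, obj_optimizer=adam(learning_rate=0.01))
```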

test_optimizer: function (grads, params) -> updates

Optimizer that is used for test function params

more_obj_params: `list`

Add custom params for objective optimizer

more_tf_params: `list`

Add custom params for test function optimizer

more_updates: `dict`

Add custom updates to resulting updates

total_grad_norm_constraint: `float`

Bounds the gradient norm to prevent the exploding-gradient problem

fn_kwargs: `dict`

Add kwargs to pytensor.function (e.g. {'profile': True})

more_replacements: `dict`

Apply custom replacements before calculating gradients
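
Putting several of these options together in one call (a sketch; the data, model, and chosen values are illustrative):

```python
import numpy as np
import pymc as pm
from pymc.variational.updates import adam

rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=200)

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("obs", mu, sigma, observed=data)

approx = pm.fit(
    n=20000,
    method="advi",
    model=model,                      # or call fit() inside the model context
    random_seed=42,
    score=True,                       # keep the loss history
    obj_n_mc=5,                       # Monte Carlo samples per gradient estimate
    obj_optimizer=adam(learning_rate=0.01),
    total_grad_norm_constraint=100.0,
    progressbar=False,
)

idata = approx.sample(1000)           # draw from the fitted approximation
print(approx.hist[-5:])               # last few loss values (tracked because score=True)
```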