pymc.sgd

pymc.sgd(loss_or_grads=None, params=None, learning_rate=0.001)

Stochastic Gradient Descent (SGD) updates

Generates update expressions of the form:

  • param := param - learning_rate * gradient
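Concretely, each entry of the returned mapping is the parameter minus the learning rate times its gradient with respect to the loss. A minimal sketch of writing the same expression by hand, assuming pytensor is importable (the names p and loss are purely illustrative):

>>> import pytensor
>>> p = pytensor.shared(1.0)
>>> loss = p ** 2
>>> manual = {p: p - 0.001 * pytensor.grad(loss, p)}  # same form as the mapping sgd() generates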

Parameters:
loss_or_grads: symbolic expression or list of expressions

A scalar loss expression, or a list of gradient expressions

params: list of shared variables

The variables to generate update expressions for

learning_rate: float or symbolic scalar

The learning rate controlling the size of update steps

Returns:
OrderedDict

A dictionary mapping each parameter to its update expression

Notes

The optimizer can be called with both loss_or_grads and params omitted; in that case a partial function is returned, which can later be called with the loss and parameters to produce the updates.

Examples

>>> import pytensor
>>> from pymc import sgd
>>> a = pytensor.shared(1.)
>>> b = a * 2
>>> updates = sgd(b, [a], learning_rate=.01)
>>> isinstance(updates, dict)
True
>>> optimizer = sgd(learning_rate=.01)
>>> callable(optimizer)
True
>>> updates = optimizer(b, [a])
>>> isinstance(updates, dict)
True
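
The returned mapping is typically passed as the updates argument of pytensor.function to build a training step. A minimal, illustrative sketch under that assumption (the quadratic loss, learning rate, and iteration count below are arbitrary choices, not part of this API):

>>> import pytensor
>>> w = pytensor.shared(0.0)
>>> loss = (w - 3.0) ** 2
>>> train = pytensor.function([], loss, updates=sgd(loss, [w], learning_rate=0.1))
>>> for _ in range(100):
...     _ = train()  # each call applies w := w - 0.1 * d(loss)/dw
>>> bool(abs(w.get_value() - 3.0) < 1e-3)
True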