pymc.Minibatch#

class pymc.Minibatch(data, batch_size=128, dtype=None, broadcastable=None, shape=None, name='Minibatch', random_seed=42, update_shared_f=None, in_memory_size=None)[source]#

Multidimensional minibatch that is a pure TensorVariable

Parameters
data: np.ndarray

initial data

batch_size: ``int`` or ``List[int|tuple(size, random_seed)]``

batch size for inference; a random seed is needed for child random generators

dtype: ``str``

cast data to a specific type

broadcastable: tuple[bool]

change the broadcastable pattern, which defaults to ``(False,) * ndim``

name: ``str``

name for the tensor; defaults to “Minibatch”

random_seed: ``int``

random seed that is used by default

update_shared_f: ``callable``

a callable that returns an ndarray to be stored in the underlying shared variable; you can use it to change the source of minibatches programmatically

in_memory_size: ``int`` or ``List[int|slice|Ellipsis]``

data size for storing in aesara.shared

Notes

Below is a common use case of Minibatch with variational inference. Importantly, we need to make PyMC “aware” that a minibatch is being used in inference; otherwise, we will get the wrong logp for the model. To do so, we pass the total_size parameter to the observed node, which correctly scales the density of the model logp that is affected by Minibatch. See the examples below.
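Conceptually, total_size rescales the minibatch likelihood so that it matches the full dataset in expectation. A minimal sketch of the scaling factor (an illustration of the idea, not the actual implementation):

>>> total_rows, batch_rows = 100, 10
>>> scale = total_rows / batch_rows  # the minibatch logp is scaled up by this factor
>>> scale
10.0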

Examples

Suppose we have data as follows:

>>> data = np.random.rand(100, 100)

If we want a 1d slice of size 10, we do:

>>> x = Minibatch(data, batch_size=10)
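This slices along the first dimension, so (a quick check):

>>> assert x.eval().shape == (10, 100)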

Note that your data is cast to floatX if it is not of an integer type, but you can still pass the dtype kwarg to Minibatch if you need more control.

If we want 10 sampled rows and columns, using the [(size, seed), (size, seed)] form, we can use:

>>> x = Minibatch(data, batch_size=[(10, 42), (10, 42)], dtype='int32')
>>> assert str(x.dtype) == 'int32'

Or, more simply, we can use the default random seed (42) with the [size, size] form:

>>> x = Minibatch(data, batch_size=[10, 10])

In the above, x is a regular TensorVariable that supports any math operations:

>>> assert x.eval().shape == (10, 10)
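For instance, it composes with ordinary tensor math (a small illustrative expression):

>>> y = (x * 2).sum(axis=0)
>>> assert y.eval().shape == (10,)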

You can pass the Minibatch x to your desired model:

>>> with pm.Model() as model:
...     mu = pm.Flat('mu')
...     sigma = pm.HalfNormal('sigma')
...     lik = pm.Normal('lik', mu, sigma, observed=x, total_size=(100, 100))

Then you can perform regular variational inference out of the box:

>>> with model:
...     approx = pm.fit()
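Afterwards you can, for example, draw from the fitted approximation; approx here is the approximation object returned by pm.fit, and drawing 500 samples is just an illustrative choice:

>>> trace = approx.sample(500)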

Important note: :class:`Minibatch` has ``shared`` and ``minibatch`` attributes that you can use later:

>>> x.set_value(np.random.laplace(size=(100, 100)))

and minibatches will then be drawn from the new storage; this directly affects x.shared. A less convenient, but more explicit, way to achieve the same thing:

>>> x.shared.set_value(pm.floatX(np.random.laplace(size=(100, 100))))

The programmatic way to change storage is as follows (we import partial for simplicity):

>>> from functools import partial
>>> datagen = partial(np.random.laplace, size=(100, 100))
>>> x = Minibatch(datagen(), batch_size=10, update_shared_f=datagen)
>>> x.update_shared()
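Each call to x.update_shared() stores a fresh draw from datagen in the underlying shared variable; as a quick check:

>>> assert x.shared.get_value().shape == (100, 100)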

To be more concrete about how we create a minibatch, here is a demo:

  1. create a shared variable:

    >>> shared = aesara.shared(data)

  2. take a random slice of size 10:

    >>> ridx = pm.at_rng().uniform(size=(10,), low=0, high=data.shape[0]-1e-10).astype('int64')

  3. take the resulting slice:

    >>> minibatch = shared[ridx]
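As a quick check of the resulting shape (data is the (100, 100) array from above):

>>> assert minibatch.eval().shape == (10, 100)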
    

That’s done. Now you can use this minibatch somewhere else. Note that the implementation does not require the shared variable to have a fixed shape, but you are free to give the shared variable a fixed shape if that suits your use case.

Suppose you need to make some replacements in the graph, e.g. change the minibatch to testdata:

>>> node = x ** 2  # arbitrary expressions on minibatch `x`
>>> testdata = pm.floatX(np.random.laplace(size=(1000, 10)))

Then you should create a dict with replacements:

>>> replacements = {x: testdata}
>>> rnode = aesara.clone_replace(node, replacements)
>>> assert (testdata ** 2 == rnode.eval()).all()

To replace a minibatch with its underlying shared variable you should do the same thing; this is useful, for example, when you want an expression to be evaluated on the full stored data rather than on a random slice. As before, clone_replace returns a new node rather than modifying the existing one. The minibatch part is accessible through the minibatch attribute. For example

>>> replacements = {x.minibatch: x.shared}
>>> rnode = aesara.clone_replace(node, replacements)
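Evaluating rnode now uses the full stored data, so its shape matches the shared storage (a quick check, assuming x was built from the (100, 100) data above):

>>> assert rnode.eval().shape == x.shared.get_value().shape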

For more complex slices, a bit more code is needed, and it can look less clear at first:

>>> moredata = np.random.rand(10, 20, 30, 40, 50)

The default total_size that can be passed to a PyMC random node is then (10, 20, 30, 40, 50), but it can be written less verbosely in some cases:

  1. Advanced indexing, total_size = (10, Ellipsis, 50)

    >>> x = Minibatch(moredata, [2, Ellipsis, 10])
    

    We take the slice only along the first and last dimensions

    >>> assert x.eval().shape == (2, 20, 30, 40, 10)
    
  2. Skipping a particular dimension, total_size = (10, None, 30):

    >>> x = Minibatch(moredata, [2, None, 20])
    >>> assert x.eval().shape == (2, 20, 20, 40, 50)
    
  3. Mixing both of these together, total_size = (10, None, 30, Ellipsis, 50):

    >>> x = Minibatch(moredata, [2, None, 20, Ellipsis, 10])
    >>> assert x.eval().shape == (2, 20, 20, 40, 10)
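Putting this together, the matching total_size goes to the observed node just as in the earlier example; a sketch for the last case (pm.Normal reused purely for illustration):

>>> with pm.Model():
...     lik = pm.Normal('lik', 0, 1, observed=x, total_size=(10, None, 30, Ellipsis, 50))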
    
Attributes
shared: shared tensor

Used for storing data

minibatch: minibatch tensor

Used for training

Methods

Minibatch.__init__(data[, batch_size, ...])

Minibatch.all([axis, keepdims])

Minibatch.any([axis, keepdims])

Minibatch.arccos()

Minibatch.arccosh()

Minibatch.arcsin()

Minibatch.arcsinh()

Minibatch.arctan()

Minibatch.arctanh()

Minibatch.argmax([axis, keepdims])

See aesara.tensor.math.argmax.

Minibatch.argmin([axis, keepdims])

See aesara.tensor.math.argmin.

Minibatch.argsort([axis, kind, order])

See aesara.tensor.sort.argsort.

Minibatch.astype(dtype)

Minibatch.ceil()

Minibatch.choose(choices[, mode])

Construct an array from an index array and a set of arrays to choose from.

Minibatch.clip(a_min, a_max)

See aesara.tensor.math.clip.

Minibatch.clone()

Return a new, un-owned Variable like self.

Minibatch.compress(a[, axis])

Return selected slices only.

Minibatch.conj()

See aesara.tensor.math.conj.

Minibatch.conjugate()

See aesara.tensor.math.conj.

Minibatch.copy([name])

Return a symbolic copy and optionally assign a name.

Minibatch.cos()

Minibatch.cosh()

Minibatch.cumprod([axis])

Minibatch.cumsum([axis])

Minibatch.deg2rad()

Minibatch.diagonal([offset, axis1, axis2])

Minibatch.dimshuffle(*pattern)

Reorder the dimensions of this variable, optionally inserting broadcasted dimensions.

Minibatch.dot(right)

Minibatch.eval([inputs_to_values])

Evaluate the Variable.

Minibatch.exp()

Minibatch.exp2()

Minibatch.expm1()

Minibatch.fill(value)

Fill inputted tensor with the assigned value.

Minibatch.flatten([ndim])

Minibatch.floor()

Minibatch.get_parents()

Return a list of the parents of this node.

Minibatch.get_scalar_constant_value()

Minibatch.get_test_value()

Get the test value.

Minibatch.log()

Minibatch.log10()

Minibatch.log1p()

Minibatch.log2()

Minibatch.make_random_slices(...)

Minibatch.make_static_slices(user_size)

Minibatch.max([axis, keepdims])

See aesara.tensor.math.max.

Minibatch.mean([axis, dtype, keepdims, ...])

See aesara.tensor.math.mean.

Minibatch.min([axis, keepdims])

See aesara.tensor.math.min.

Minibatch.nonzero([return_matrix])

See aesara.tensor.basic.nonzero.

Minibatch.nonzero_values()

See aesara.tensor.basic.nonzero_values.

Minibatch.norm(L[, axis, keepdims])

Minibatch.ones_like([dtype])

Minibatch.prod([axis, dtype, keepdims, ...])

See aesara.tensor.math.prod.

Minibatch.ptp([axis])

See aesara.tensor.math.ptp.

Minibatch.rad2deg()

Minibatch.ravel()

Minibatch.repeat(repeats[, axis])

See aesara.tensor.basic.repeat.

Minibatch.reshape(shape[, ndim])

Return a reshaped view/copy of this variable.

Minibatch.round([mode])

See aesara.tensor.math.round.

Minibatch.rslice(total, size, seed)

Minibatch.searchsorted(v[, side, sorter])

Minibatch.set_value(value)

Minibatch.sin()

Minibatch.sinh()

Minibatch.sort([axis, kind, order])

See aesara.tensor.sort.sort.

Minibatch.sqrt()

Minibatch.squeeze()

Remove broadcastable dimensions from the shape of an array.

Minibatch.std([axis, ddof, keepdims, corrected])

See aesara.tensor.math.std.

Minibatch.sum([axis, dtype, keepdims, acc_dtype])

See aesara.tensor.math.sum.

Minibatch.swapaxes(axis1, axis2)

See aesara.tensor.basic.swapaxes.

Minibatch.take(indices[, axis, mode])

Minibatch.tan()

Minibatch.tanh()

Minibatch.trace()

Minibatch.transfer(target)

Transfer this array's data to another device.

Minibatch.transpose(*axes)

Transpose this array.

Minibatch.trunc()

Minibatch.update_shared()

Minibatch.var([axis, ddof, keepdims, corrected])

See aesara.tensor.math.var.

Minibatch.zeros_like([dtype])

Attributes

RNG

T

broadcastable

The broadcastable signature of this tensor.

dtype

The dtype of this tensor.

imag

Return the imaginary component of a complex-valued tensor.

index

ndim

The rank of this tensor.

owner

real

Return the real component of a complex-valued tensor.

shape

size