pymc.math.clip = <aesara.tensor.elemwise.Elemwise object>

Clip x to be between min and max.
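The elementwise semantics mirror numpy.clip; a minimal NumPy sketch of the behavior (using np.clip as a stand-in for the symbolic op):

```python
import numpy as np

# Clip each element of x into the closed interval [0, 1].
x = np.array([-2.0, 0.0, 0.5, 1.0, 3.0])
clipped = np.clip(x, 0.0, 1.0)
# Values below 0 become 0, values above 1 become 1, the rest pass through.
print(clipped)
```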

Note that when x is equal to one of the boundaries, the output is considered to be x, so at these points the gradient of the cost with respect to the output is propagated to x, not to min or max. In other words, at these points the gradient with respect to x equals the gradient with respect to the output, and the gradients with respect to min and max are zero.
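The boundary convention above can be sketched with NumPy masks. This is a hand-written backward rule for clip, not the library's implementation; the inclusive comparisons are what route boundary points to x:

```python
import numpy as np

def clip_grads(x, lo, hi, g_out):
    """Subgradients of y = clip(x, lo, hi) under the convention that
    points where x equals a boundary are attributed to x."""
    g_x = g_out * ((x >= lo) & (x <= hi))  # inclusive: boundary gradient goes to x
    g_lo = g_out * (x < lo)                # strictly below the lower bound
    g_hi = g_out * (x > hi)                # strictly above the upper bound
    return g_x, g_lo, g_hi

x = np.array([-1.0, 0.0, 0.5, 1.0, 2.0])
g_x, g_lo, g_hi = clip_grads(x, 0.0, 1.0, np.ones_like(x))
# At x == 0.0 and x == 1.0 the gradient flows to x, not to the bounds.
```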

Generalizes a scalar Op to tensors.

All the inputs must have the same number of dimensions. When the Op is performed, each input's size along a given dimension must match the others. As a special case, a dimension's size may also be 1, but only if the input's broadcastable flag is True for that dimension. In that case, the tensor is (virtually) replicated along that dimension to match the size of the others.
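The size-1 replication rule can be illustrated with NumPy broadcasting, which follows the same convention:

```python
import numpy as np

a = np.arange(10).reshape(10, 1)  # shape (10, 1)
b = np.arange(5).reshape(1, 5)    # shape (1, 5)

# Each size-1 dimension is (virtually) replicated to match the other input,
# so the elementwise result has shape (10, 5).
out = a + b
print(out.shape)
```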

The dtypes of the outputs mirror those of the scalar Op that is being generalized to tensors. In particular, if the calculations for an output are done in-place on an input, the output type must be the same as the corresponding input type (see the ScalarOp documentation for help with controlling the output type).


- Elemwise(add): represents + on tensors (x + y)
- Elemwise(add, {0: 0}): represents the += operation (x += y)
- Elemwise(add, {0: 1}): represents += on the second argument (y += x)
- Elemwise(mul)(np.random.random((10, 5)), np.random.random((1, 5))): the second input is replicated along the first dimension to match the first input
- Elemwise(true_div)(np.random.random((10, 5)), np.random.random((10, 1))): same, but along the second dimension
- Elemwise(int_div)(np.random.random((1, 5)), np.random.random((10, 1))): the output has size (10, 5)
- Elemwise(log)(np.random.random((3, 4, 5)))
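The broadcasting cases listed above can be checked directly in NumPy, whose elementwise ops follow the same shape rules:

```python
import numpy as np

rng = np.random.default_rng(0)

# mul: (10, 5) * (1, 5) -> the second input is replicated along the first dimension
assert np.multiply(rng.random((10, 5)), rng.random((1, 5))).shape == (10, 5)

# true_div: (10, 5) / (10, 1) -> replication along the second dimension
assert np.true_divide(rng.random((10, 5)), rng.random((10, 1))).shape == (10, 5)

# int_div: (1, 5) // (10, 1) -> both inputs broadcast; the output is (10, 5)
assert np.floor_divide(rng.random((1, 5)) + 1.0, rng.random((10, 1)) + 1.0).shape == (10, 5)

# log: unary ops keep the input shape
assert np.log(rng.random((3, 4, 5)) + 1.0).shape == (3, 4, 5)
```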