Math

This submodule contains various mathematical functions. Most of them are imported directly from pytensor.tensor; see the PyTensor documentation for more details. Doing any kind of math with PyMC random variables, or defining custom likelihoods or priors, requires you to use these PyTensor expressions rather than NumPy or Python code.
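For example, here is a minimal sketch (variable names and data are illustrative) of applying one of these functions to a random variable inside a model; NumPy functions generally cannot operate on x here, because it is a symbolic PyTensor variable rather than an array:

```python
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    # x is symbolic, so transform it with pm.math rather than NumPy
    y = pm.Deterministic("y", pm.math.exp(x))
    obs = pm.Normal("obs", mu=y, sigma=1.0, observed=[0.5, 1.2, 0.8])
```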

Functions exposed in pymc namespace

expand_packed_triangular(n, packed[, lower, ...])

Convert a packed triangular matrix into a two-dimensional array.

logit(p)

Log-odds function log(p / (1 - p)), the inverse of the logistic sigmoid.

invlogit

Logistic sigmoid function 1 / (1 + exp(-x)), also known as expit or the inverse logit.

probit(p)

Inverse cumulative distribution function (quantile function) of the standard normal.

invprobit(x)

Cumulative distribution function of the standard normal.

logaddexp(*xs)

Logarithm of the sum of exponentiations of the inputs.

logsumexp(x[, axis, keepdims])

Compute the log of the sum of exponentials of input elements.
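A quick numerical sketch of the transforms listed above (the values are arbitrary): invlogit inverts logit, and logsumexp avoids the overflow that a naive log(sum(exp(x))) would hit:

```python
import numpy as np
import pymc as pm

p = pm.math.invlogit(pm.math.logit(0.3))
print(p.eval())  # ~0.3: the round trip recovers the input

x = np.array([1000.0, 1000.0])
# ~1000.693; np.log(np.sum(np.exp(x))) would overflow to inf
print(pm.math.logsumexp(x).eval())
```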

Functions exposed in pymc.math

abs

|a|

prod(input[, axis, dtype, keepdims, ...])

Computes the product along the given axis(es) of a tensor input.

dot(l, r)

Return a symbolic dot product.

eq

a == b

neq

a != b

ge

a >= b

gt

a > b

le

a <= b

lt

a < b

exp

e^a

log

base e logarithm of a

sgn(a)

sign of a

sqr

square of a

sqrt

square root of a

sum(input[, axis, dtype, keepdims, acc_dtype])

Computes the sum along the given axis(es) of a tensor input.

ceil

ceiling of a

floor

floor of a

sin

sine of a

sinh

hyperbolic sine of a

arcsin

arcsine of a

arcsinh

hyperbolic arc sine of a

cos

cosine of a

cosh

hyperbolic cosine of a

arccos

arccosine of a

arccosh

hyperbolic arc cosine of a

tan

tangent of a

tanh

hyperbolic tangent of a

arctan

arctangent of a

arctanh

hyperbolic arc tangent of a

cumprod(x[, axis])

Return the cumulative product of the elements along a given axis.

cumsum(x[, axis])

Return the cumulative sum of the elements along a given axis.

matmul(x1, x2[, dtype])

Compute the matrix product of two tensor variables.

and_

bitwise a & b

broadcast_to(x, shape)

Broadcast an array to a new shape.

clip

Clip x to be between min and max.

concatenate(tensor_list[, axis])

Alias for join(axis, *tensor_list).

flatten(x[, ndim])

Return a copy of the array collapsed into one dimension.

or_

bitwise a | b

stack(tensors[, axis])

Stack tensors in sequence on given axis (default is 0).

switch

Elementwise if-then-else: returns ift where cond is true, otherwise iff.

where

Alias for switch: returns ift where cond is true, otherwise iff.

flatten_list(tensors)

constant(x[, name, ndim, dtype])

Return a TensorConstant with value x.

max(x[, axis, keepdims])

Returns maximum elements obtained by iterating over given axis.

maximum

Elemwise maximum.

mean(input[, axis, dtype, op, keepdims, ...])

Computes the mean value along the given axis(es) of a tensor input.

min(x[, axis, keepdims])

Returns minimum elements obtained by iterating over given axis.

minimum

Elemwise minimum.

round(a[, mode])

round_mode(a) with mode in [half_away_from_zero, half_to_even].

erf

error function

erfc

complementary error function

erfcinv

inverse complementary error function

erfinv

inverse error function

log1pexp

Compute log(1 + exp(x)), also known as softplus or log1pexp.

log1mexp(x, *[, negative_input])

Return log(1 - exp(-x)).

logaddexp(*xs)

Logarithm of the sum of exponentiations of the inputs.

logsumexp(x[, axis, keepdims])

Compute the log of the sum of exponentials of input elements.

logdiffexp(a, b)

Return log(exp(a) - exp(b)).

logit(p)

Log-odds function log(p / (1 - p)), the inverse of the logistic sigmoid.

invlogit

Logistic sigmoid function 1 / (1 + exp(-x)), also known as expit or the inverse logit.

probit(p)

Inverse cumulative distribution function (quantile function) of the standard normal.

invprobit(x)

Cumulative distribution function of the standard normal.

sigmoid

Logistic sigmoid function 1 / (1 + exp(-x)), also known as expit or the inverse logit.

softmax(c[, axis])

Softmax of c along the given axis.

log_softmax(c[, axis])

Logarithm of the softmax of c along the given axis.

logbern(log_p)

full(shape, fill_value[, dtype])

Return a new array of given shape and type, filled with fill_value.

full_like(a, fill_value[, dtype])

Equivalent of numpy.full_like.

ones(shape[, dtype])

Create a TensorVariable filled with ones, closer to NumPy's syntax than alloc.

ones_like(model[, dtype, opt])

Equivalent of numpy.ones_like. If opt is True, a constant is returned instead of a graph when possible; this is useful for PyTensor optimization, but not for building a graph, since model is then not always part of the graph.

zeros(shape[, dtype])

Create a TensorVariable filled with zeros, closer to NumPy's syntax than alloc.

zeros_like(model[, dtype, opt])

Equivalent of numpy.zeros_like. If opt is True, a constant is returned instead of a graph when possible; this is useful for PyTensor optimization, but not for building a graph, since model is then not always part of the graph.

kronecker(*Ks)

Return the Kronecker product of the arguments.

cartesian(*arrays)

Makes the Cartesian product of arrays.

kron_dot(krons, m, *[, op])

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

kron_solve_lower(krons, m, *[, op])

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

kron_solve_upper(krons, m, *[, op])

Apply op to krons and m in a way that reproduces op(kronecker(*krons), m).

kron_diag(*diags)

Returns the diagonal of a Kronecker product.

flat_outer(a, b)

expand_packed_triangular(n, packed[, lower, ...])

Convert a packed triangular matrix into a two-dimensional array.

batched_diag(C)

block_diagonal(matrices[, sparse, format])

See pt.slinalg.block_diag or pytensor.sparse.basic.block_diag for reference.

matrix_inverse

Computes the inverse of a square matrix.

logdet

Compute the logarithm of the absolute determinant of a square matrix M, log(abs(det(M))) on the CPU.
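To illustrate the packing and Kronecker helpers listed above, a small sketch (the matrix values are made up for illustration):

```python
import numpy as np
import pymc as pm

# 3x3 lower-triangular matrix stored as its 6 packed entries
packed = np.array([1.0, 0.2, 1.5, 0.1, -0.3, 2.0])
L = pm.math.expand_packed_triangular(3, packed, lower=True)
print(L.eval())  # full 3x3 lower-triangular matrix

# kronecker multiplies out a sequence of factor matrices
A = np.eye(2)
B = np.array([[2.0, 0.0], [0.0, 3.0]])
print(pm.math.kronecker(A, B).eval())  # 4x4 Kronecker product
```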