pymc.KroneckerNormal#

class pymc.KroneckerNormal(name, *args, rng=None, dims=None, initval=None, observed=None, total_size=None, transform=UNSET, default_transform=UNSET, **kwargs)[source]#

Multivariate normal log-likelihood with Kronecker-structured covariance.

\[f(x \mid \mu, K) = \frac{1}{(2\pi)^{N/2} |K|^{1/2}} \exp\left\{ -\frac{1}{2} (x-\mu)^{\prime} K^{-1} (x-\mu) \right\}\]

Support

\(x \in \mathbb{R}^N\)

Mean

\(\mu\)

Variance

\(K = \bigotimes K_i + \sigma^2 I_N\)
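The Kronecker structure is what keeps this density tractable. Writing each submatrix through its eigendecomposition \(K_i = Q_i \Lambda_i Q_i'\), the full covariance, including the white-noise term, factors as

\[K = \bigotimes K_i + \sigma^2 I_N = \left(\bigotimes Q_i\right) \left(\bigotimes \Lambda_i + \sigma^2 I_N\right) \left(\bigotimes Q_i\right)^{\prime}\]

so the inverse and determinant appearing in the density involve only the small \(Q_i\) matrices and a diagonal matrix; this is the identity exploited behind the scenes [1].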

Parameters:
mu : tensor_like of float

Vector of means, just as in MvNormal.

covs : list of arrays

The set of covariance matrices \([K_1, K_2, ...]\) to be Kroneckered in the order provided \(\bigotimes K_i\).

chols : list of arrays

The set of lower Cholesky matrices \([L_1, L_2, ...]\) such that \(K_i = L_i L_i'\).

evds : list of tuples

The set of (eigenvalue vector, eigenvector matrix) pairs \([(v_1, Q_1), (v_2, Q_2), ...]\) such that \(K_i = Q_i \text{diag}(v_i) Q_i'\). For example:

v_i, Q_i = pt.linalg.eigh(K_i)

sigma : scalar, optional

Standard deviation of the Gaussian white noise.

References

[1]

Saatchi, Y. (2011). “Scalable inference for structured Gaussian process models”

Examples

Define a multivariate normal variable with a covariance \(K = K_1 \otimes K_2\)

import numpy as np
import pymc as pm

K1 = np.array([[1., 0.5], [0.5, 2]])
K2 = np.array([[1., 0.4, 0.2], [0.4, 2, 0.3], [0.2, 0.3, 1]])
covs = [K1, K2]
N = 6
mu = np.zeros(N)
with pm.Model() as model:
    vals = pm.KroneckerNormal('vals', mu=mu, covs=covs, shape=N)
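For intuition, and setting efficiency aside, the model above is equivalent to an ordinary MvNormal whose covariance is the explicit \(6 \times 6\) Kronecker product. A minimal sketch of that dense equivalent (the name dense_model is only illustrative):

# Dense equivalent: form the full 6 x 6 covariance K = kron(K1, K2) explicitly
with pm.Model() as dense_model:
    vals_dense = pm.MvNormal('vals_dense', mu=mu, cov=np.kron(K1, K2), shape=N)

KroneckerNormal never forms this dense matrix.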

Efficiency gains are made by Cholesky decomposing \(K_1\) and \(K_2\) individually rather than the larger \(K\) matrix. Although only two matrices \(K_1\) and \(K_2\) are shown here, an arbitrary number of submatrices can be combined in this way. Cholesky factors and eigendecompositions can be provided instead

chols = [np.linalg.cholesky(Ki) for Ki in covs]
evds = [np.linalg.eigh(Ki) for Ki in covs]
with pm.Model() as model:
    vals2 = pm.KroneckerNormal('vals2', mu=mu, chols=chols, shape=N)
    # or
    vals3 = pm.KroneckerNormal('vals3', mu=mu, evds=evds, shape=N)

Neither of these forms is converted internally (unlike the noisy case below). Diagonal noise can also be added to the covariance matrix, \(K = K_1 \otimes K_2 + \sigma^2 I_N\). Although the noise removes the overall Kronecker structure of the matrix, KroneckerNormal can still make efficient calculations by utilizing eigendecompositions of the submatrices behind the scenes [1]. Thus,

sigma = 0.1
with pm.Model() as noise_model:
    vals = pm.KroneckerNormal('vals', mu=mu, covs=covs, sigma=sigma, shape=N)
    vals2 = pm.KroneckerNormal('vals2', mu=mu, chols=chols, sigma=sigma, shape=N)
    vals3 = pm.KroneckerNormal('vals3', mu=mu, evds=evds, sigma=sigma, shape=N)

are identical, with covs and chols each converted to eigendecompositions.
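A common use is as a likelihood for observations that live on a grid, with one submatrix per grid dimension. A minimal sketch along those lines, reusing covs and sigma from above; the 2 x 3 observation array y_grid and the latent mean are hypothetical:

y_grid = np.array([[0.1, -0.3, 0.2],
                   [0.4, 0.0, -0.1]])  # hypothetical 2 x 3 grid of data
y = y_grid.ravel()  # row-major flattening matches the ordering of kron(K1, K2)
with pm.Model() as obs_model:
    mu_latent = pm.Normal('mu_latent', 0.0, 1.0, shape=N)  # unknown mean vector
    pm.KroneckerNormal('y_obs', mu=mu_latent, covs=covs, sigma=sigma,
                       observed=y, shape=N)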

Methods

KroneckerNormal.dist(mu[, covs, chols, ...])

Create a tensor variable corresponding to the KroneckerNormal distribution.
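For example, a standalone variable can be built with .dist and, assuming forward sampling is supported for this distribution in your PyMC version, sampled with pm.draw; a minimal sketch reusing mu, covs, and N from the examples above:

x = pm.KroneckerNormal.dist(mu=mu, covs=covs, shape=N)
draws = pm.draw(x, draws=100)  # array of prior draws with shape (100, N)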