pymc.LKJCorr

class pymc.LKJCorr(name, n, eta, *, return_matrix=False, **kwargs)

The LKJ (Lewandowski, Kurowicka and Joe) log-likelihood.

The LKJ distribution is a prior distribution for correlation matrices. If eta = 1, this corresponds to the uniform distribution over correlation matrices. As eta -> infinity, the prior increasingly concentrates around the identity matrix.
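
A quick way to see this effect is to draw from the prior directly (a minimal sketch, assuming a recent PyMC where pm.draw is available; the printed spreads vary with the random seed):

import numpy as np
import pymc as pm

# Draw 1000 prior samples of the 3 off-diagonal correlations of a
# 3 x 3 matrix, once with a flat prior and once with a large eta
flat = pm.draw(pm.LKJCorr.dist(n=3, eta=1.0), draws=1000)
tight = pm.draw(pm.LKJCorr.dist(n=3, eta=50.0), draws=1000)

# eta = 1 spreads mass over [-1, 1]; large eta clusters the
# correlations near 0, i.e. near the identity matrix
print(np.std(flat), np.std(tight))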

Support

Upper triangular matrix with values in [-1, 1]

Parameters:
n : tensor_like of int

Dimension of the correlation matrix (n > 1).

eta : tensor_like of float

The shape parameter (eta > 0) of the LKJ distribution. eta = 1 implies a uniform distribution over correlation matrices; larger values put more weight on matrices with weak correlations (entries close to zero).

return_matrix : bool, default=False

If True, returns the full correlation matrix. If False, returns only the values of the upper triangular matrix, excluding the diagonal, as a single vector of length n(n-1)/2 for memory efficiency (see the shape sketch below).
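
For instance (a minimal sketch drawing from the distribution outside of a model):

import pymc as pm

# Default: a flat vector of the n*(n-1)/2 = 6 upper-triangular values
vec = pm.draw(pm.LKJCorr.dist(n=4, eta=2))  # shape (6,)

# With return_matrix=True: the full 4 x 4 correlation matrix
mat = pm.draw(pm.LKJCorr.dist(n=4, eta=2, return_matrix=True))  # shape (4, 4)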

Notes

This is mainly useful if you want the standard deviations to be fixed, as LKJCholeskyCov is optimized for the case where they come from a distribution.
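
For contrast, a hedged sketch of the LKJCholeskyCov pattern when the standard deviations themselves get a prior (the Exponential sd_dist here is an arbitrary choice, not a recommendation):

import numpy as np
import pymc as pm

with pm.Model():
    # Standard deviations come from a prior instead of being fixed;
    # compute_corr=True also returns the correlation matrix and stds
    chol, corr, stds = pm.LKJCholeskyCov(
        'chol_cov', n=10, eta=4,
        sd_dist=pm.Exponential.dist(1.0),
        compute_corr=True,
    )
    vals = pm.MvNormal('vals', mu=np.zeros(10), chol=chol)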

References

[LKJ2009]

Lewandowski, D., Kurowicka, D., and Joe, H. (2009). "Generating random correlation matrices based on vines and extended onion method." Journal of Multivariate Analysis, 100(9), pp. 1989-2001.

Examples

import numpy as np
import pymc as pm
import pytensor.tensor as pt

with pm.Model() as model:

    # Define the vector of fixed standard deviations
    sds = 3 * np.ones(10)

    corr = pm.LKJCorr(
        'corr', eta=4, n=10, return_matrix=True
    )

    # Define a new MvNormal with the given correlation matrix
    vals = sds * pm.MvNormal('vals', mu=np.zeros(10), cov=corr, shape=10)

    # Or transform an uncorrelated normal distribution:
    vals_raw = pm.Normal('vals_raw', shape=10)
    chol = pt.linalg.cholesky(corr)
    vals = sds * pt.dot(chol, vals_raw)

    # The matrix is internally still sampled as an upper triangular vector
    # If you want access to it in matrix form in the trace, add
    pm.Deterministic('corr_mat', corr)
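
After sampling, the matrix-valued Deterministic is available in the trace (a short sketch continuing the model above):

with model:
    idata = pm.sample()

# 'corr_mat' holds the full 10 x 10 matrix for each draw
print(idata.posterior['corr_mat'].shape)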

Methods

LKJCorr.dist(n, eta, *[, return_matrix])