pymc.gp.HSGP.prior_linearized

HSGP.prior_linearized(Xs)

Linearized version of the HSGP. Returns the Laplace eigenfunctions and the square root of the power spectral density needed to create the GP.

This function allows the user to bypass the GP interface and work with the basis and coefficients directly. This format allows the user to make predictions with pm.set_data, similarly to a linear model. It also enables computational speed-ups in multi-GP models, since the GPs may share the same basis (a sketch of this is given at the end of the examples). The return values are the Laplace eigenfunctions phi and the square root of the power spectral density.

Correct results when using prior_linearized in tandem with pm.set_data and pm.Data require two conditions. First, L must be specified instead of c when the GP is constructed; if not, a RuntimeError is raised. Second, Xs must be zero-centered, so its mean must be subtracted. An example is given below.

Parameters:
Xs: array-like

Function input values. They are assumed to be mean-subtracted, i.e. centered at zero.

Returns:
phi: array_like

Either a NumPy or PyTensor 2D array of the fixed basis vectors. There are n rows, one per row of Xs, and prod(m) columns, one per basis vector.

sqrt_psd: array_like

Either a NumPy or PyTensor 1D array of the square roots of the power spectral densities.

Examples

import numpy as np
import pymc as pm

# A one-dimensional column vector of inputs.
X = np.linspace(0, 10, 100)[:, None]

with pm.Model() as model:
    eta = pm.Exponential("eta", lam=1.0)
    ell = pm.InverseGamma("ell", mu=5.0, sigma=5.0)
    cov_func = eta**2 * pm.gp.cov.ExpQuad(1, ls=ell)

    # m = [200] means 200 basis vectors for the first dimension
    # L = [10] means the approximation is valid for Xs in [-10, 10]
    gp = pm.gp.HSGP(m=[200], L=[10], cov_func=cov_func)

    # Order is important.
    # First calculate the mean, then make X a shared variable, then subtract the mean.
    # That way, when X is mutated later, the original (training) mean is still subtracted.
    X_mean = np.mean(X, axis=0)
    X = pm.Data("X", X)
    Xs = X - X_mean

    # Pass the mean-subtracted Xs to the GP
    phi, sqrt_psd = gp.prior_linearized(Xs=Xs)
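    # phi is a 2D basis matrix: one row per row of Xs, prod(m) columns (one per basis vector).
    # sqrt_psd is a 1D vector with one entry per basis vector.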

    # Specify a standard normal prior on the coefficients. Their number is given
    # by the number of basis vectors, which is also saved in the GP object
    # as _m_star.
    beta = pm.Normal("beta", size=gp._m_star)

    # The (non-centered) GP approximation is given by:
    f = pm.Deterministic("f", phi @ (beta * sqrt_psd))

    # The centered approximation can be more efficient when
    # the GP is stronger than the noise
    # beta = pm.Normal("beta", sigma=sqrt_psd, size=gp._m_star)
    # f = pm.Deterministic("f", phi @ beta)

    ...
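    # The elided part above is model-specific; typically it adds a likelihood
    # and samples the model, which is where the idata used below comes from.
    # A hypothetical sketch (y and sigma are illustrative, not part of the API):
    # sigma = pm.HalfNormal("sigma", 1.0)
    # pm.Normal("y_obs", mu=f, sigma=sigma, observed=y)
    # idata = pm.sample()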

# Then it works just like a linear regression to predict on new data.
# First mutate the data X,
x_new = np.linspace(-10, 10, 100)
with model:
    model.set_data("X", x_new[:, None])

# and then make predictions for the GP using posterior predictive sampling.
with model:
    ppc = pm.sample_posterior_predictive(idata, var_names=["f"])
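
The description above notes that multi-GP models can share the same basis. The following is a minimal sketch of that pattern, not part of the documented API: the variable names (gp1, gp2, beta1, beta2, f1, f2) are illustrative, and the two GPs simply use the same m and L so that prior_linearized returns an identical basis phi for both.

import numpy as np
import pymc as pm

X = np.linspace(0, 10, 100)[:, None]

with pm.Model() as model:
    X_mean = np.mean(X, axis=0)
    X_data = pm.Data("X", X)
    Xs = X_data - X_mean

    ell1 = pm.InverseGamma("ell1", mu=2.0, sigma=1.0)
    ell2 = pm.InverseGamma("ell2", mu=5.0, sigma=2.0)
    cov1 = pm.gp.cov.ExpQuad(1, ls=ell1)
    cov2 = pm.gp.cov.ExpQuad(1, ls=ell2)

    # Same m and L, so both GPs have the same eigenfunction basis phi;
    # only the spectral densities depend on the covariance functions.
    gp1 = pm.gp.HSGP(m=[200], L=[10], cov_func=cov1)
    gp2 = pm.gp.HSGP(m=[200], L=[10], cov_func=cov2)

    phi, sqrt_psd1 = gp1.prior_linearized(Xs=Xs)
    _, sqrt_psd2 = gp2.prior_linearized(Xs=Xs)

    beta1 = pm.Normal("beta1", size=gp1._m_star)
    beta2 = pm.Normal("beta2", size=gp2._m_star)

    # Both approximations reuse the phi returned by the first call,
    # so the basis enters the graph once.
    f1 = pm.Deterministic("f1", phi @ (beta1 * sqrt_psd1))
    f2 = pm.Deterministic("f2", phi @ (beta2 * sqrt_psd2))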