pymc.gp.HSGPPeriodic.prior_linearized

HSGPPeriodic.prior_linearized(X)

Linearized version of the approximation. Returns the cosine and sine bases and coefficients of the expansion needed to create the GP.

This function allows the user to bypass the GP interface and work with the basis and coefficients directly. This format allows the user to create predictions using pm.set_data, similarly to a linear model. It also enables computational speed-ups in multi-GP models, since the GPs may share the same basis.
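Because the basis vectors are fixed given the inputs and the covariance settings, several GPs can reuse one set of them. The sketch below illustrates such reuse; it is not part of the documented API, and it borrows gp, X, and the 2 * m - 1 coefficient layout from the Examples section further down, assumed to run inside the same model context.

# Hypothetical sketch: two latent functions sharing one set of bases.
(phi_cos, phi_sin), psd = gp.prior_linearized(X=X)
m = gp._m
beta1 = pm.Normal("beta1", size=(m * 2 - 1))
beta2 = pm.Normal("beta2", size=(m * 2 - 1))
f1 = phi_cos @ (psd * beta1[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta1[m:])
f2 = phi_cos @ (psd * beta2[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta2[m:])

The bases and psd are computed once, so each additional GP costs only its own coefficient vector.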

Correct results when using prior_linearized in tandem with pm.set_data and pm.Data require that the inputs X are zero-centered, so their mean must be subtracted. An example is given below.
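One way to satisfy this, sketched under the assumption that you manage the centering yourself, is to compute the mean as a plain NumPy value before wrapping X in pm.Data, so that the training mean is still the value subtracted after X is mutated:

# Hypothetical sketch: freeze the training mean so pm.set_data keeps it.
X_mean = np.mean(X, axis=0)  # fixed NumPy value, computed at training time
X = pm.Data("X", X)          # shared variable that pm.set_data can mutate
Xs = X - X_mean              # zero-centered inputs for the GP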

Parameters:
X: array-like

Function input values.

Returns:
(phi_cos, phi_sin): Tuple[array_like]

Tuple of NumPy or PyTensor 2D arrays containing the cosine and sine fixed basis vectors. Each has n rows, one per row of X, and m columns, one per basis vector.

psd: array_like

A NumPy or PyTensor 1D array of the coefficients of the expansion.
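As a concrete check of these shapes, continuing from the Examples section below where n = 100 input rows and m = 200 basis vectors (a sketch; .eval() materializes the symbolic PyTensor shapes):

assert tuple(phi_cos.shape.eval()) == (100, 200)  # (n, m)
assert tuple(phi_sin.shape.eval()) == (100, 200)  # (n, m)
# psd is 1D with m = 200 entries, one per basis vector.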

Examples

import numpy as np
import pymc as pm

# A one dimensional column vector of inputs.
X = np.linspace(0, 10, 100)[:, None]

with pm.Model() as model:
    scale = pm.HalfNormal("scale", 10)
    cov_func = pm.gp.cov.Periodic(1, period=1.0, ls=2.0)

    # m=200 means 200 basis vectors
    gp = pm.gp.HSGPPeriodic(m=200, scale=scale, cov_func=cov_func)

    # Set X as Data so it can be mutated later, and then pass it to the GP
    X = pm.Data("X", X)
    (phi_cos, phi_sin), psd = gp.prior_linearized(X=X)

    # Specify a standard normal prior on the coefficients. There are
    # 2 * m - 1 of them: one for each cosine term, and one for each sine
    # term except the first, since the first sine eigenfunction is
    # identically zero.
    m = gp._m
    beta = pm.Normal("beta", size=(m * 2 - 1))
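    # For example, with m = 200 basis vectors there are
    # 2 * 200 - 1 = 399 coefficients: 200 cosine betas and 199 sine betas.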

    # The (non-centered) GP approximation is given by
    f = pm.Deterministic(
        "f",
        phi_cos @ (psd * beta[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta[m:])
    )
    ...
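
# The `...` above stands for the rest of the model: an observation model
# and sampling. One hypothetical way to complete it, so that `idata`
# exists below, is a Gaussian likelihood on observed values `y_obs`
# (a name not defined in the original example) followed by sampling:
with model:
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y", mu=f, sigma=sigma, observed=y_obs)
    idata = pm.sample()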

# Then it works just like a linear regression to predict on new data.
# First mutate the data X,
x_new = np.linspace(-10, 10, 100)
with model:
    model.set_data("X", x_new[:, None])

# and then make predictions for the GP using posterior predictive sampling.
with model:
    ppc = pm.sample_posterior_predictive(idata, var_names=["f"])
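
# The posterior predictive draws of "f" for the new inputs live in the
# returned InferenceData; for example, a point prediction can be taken
# by averaging over chains and draws:
f_pred = ppc.posterior_predictive["f"].mean(dim=("chain", "draw"))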