Linearized version of the approximation. Returns the cosine and sine bases and coefficients of the expansion needed to create the GP.

This function allows the user to bypass the GP interface and work directly with the basis and coefficients. This format allows the user to create predictions using pm.set_data, similar to a linear model. It also enables computational speed-ups in multi-GP models, since they may share the same basis.
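To illustrate the shared-basis speed-up, here is a minimal NumPy-only sketch (the basis frequencies and sizes are illustrative, not the actual HSGP construction): the fixed basis is computed once, and each GP is just a different coefficient vector applied to it.

```python
import numpy as np

# Illustrative shared-basis sketch: phi is computed once, then reused.
n, m = 100, 10
Xs = np.linspace(-5, 5, n)[:, None]
freqs = np.arange(m)              # illustrative basis frequencies
phi = np.cos(Xs * freqs)          # fixed basis, shape (n, m)

rng = np.random.default_rng(0)
beta1 = rng.normal(size=m)        # coefficients for the first GP
beta2 = rng.normal(size=m)        # coefficients for the second GP

f1 = phi @ beta1                  # each GP is a linear model in phi
f2 = phi @ beta2
```

In a multi-GP model, only the cheap matrix-vector products differ per GP; the basis itself is shared.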

Correct results when using prior_linearized in tandem with pm.set_data and pm.MutableData require that the Xs are zero-centered, so its mean must be subtracted. An example is given below.

Xs: array-like

Function input values. Assumes they have been mean subtracted or centered at zero.
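The centering itself is a one-liner; a minimal sketch of preparing `Xs` from raw inputs:

```python
import numpy as np

# Center the inputs at zero by subtracting the column means.
X = np.linspace(0, 10, 100)[:, None]
X_mean = X.mean(axis=0)
Xs = X - X_mean   # mean-subtracted inputs, ready for prior_linearized
```

Keep `X_mean` around: the same mean must be subtracted from any new inputs at prediction time.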

(phi_cos, phi_sin): Tuple[array_like]

Tuple of either NumPy or PyTensor 2D arrays of the cosine and sine fixed basis vectors. Each has n rows, one per row of Xs, and m columns, one per basis vector.

psd: array_like

Either a NumPy or PyTensor 1D array of the coefficients of the expansion.
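A NumPy-only sketch of how the returned pieces combine, mirroring the expression used in the example below (the arrays here are random stand-ins for the real bases and coefficients; only the shapes matter):

```python
import numpy as np

n, m = 100, 5
rng = np.random.default_rng(1)
phi_cos = rng.normal(size=(n, m))   # stand-in for the cosine basis
phi_sin = rng.normal(size=(n, m))   # stand-in for the sine basis
psd = rng.uniform(size=m)           # stand-in for the expansion coefficients
beta = rng.normal(size=2 * m - 1)   # one beta per cosine term, plus
                                    # all but one of the sine terms

# Cosine part uses all m terms; sine part drops the first, whose
# eigenfunction is zero.
f = phi_cos @ (psd * beta[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta[m:])
```

The result `f` has one value per input row, exactly as in a linear regression.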


# A one dimensional column vector of inputs.
X = np.linspace(0, 10, 100)[:, None]

with pm.Model() as model:
    scale = pm.HalfNormal("scale", 10)
    cov_func = pm.gp.cov.Periodic(1, period=1.0, ls=2.0)

    # m=200 means 200 basis vectors
    gp = pm.gp.HSGPPeriodic(m=200, scale=scale, cov_func=cov_func)

    # Order is important.  First calculate the mean, then make X a shared variable,
    # then subtract the mean.  When X is mutated later, the correct mean will be
    # subtracted.
    X_mean = np.mean(X, axis=0)
    X = pm.MutableData("X", X)
    Xs = X - X_mean

    # Pass the mean-subtracted Xs in to the GP
    (phi_cos, phi_sin), psd = gp.prior_linearized(Xs=Xs)

    # Specify standard normal prior in the coefficients.  The number of which
    # is twice the number of basis vectors minus one.
    # This is so that each cosine term has a `beta`, as do all but one of
    # the sine terms, since the first eigenfunction for the sine component
    # is zero.
    m = gp._m
    beta = pm.Normal("beta", size=(m * 2 - 1))

    # The (non-centered) GP approximation is given by
    f = pm.Deterministic(
        "f",
        phi_cos @ (psd * beta[:m]) + phi_sin[..., 1:] @ (psd[1:] * beta[m:]),
    )

# Then it works just like a linear regression to predict on new data.
# First mutate the data X,
x_new = np.linspace(-10, 10, 100)
with model:
    model.set_data("X", x_new[:, None])

# and then make predictions for the GP using posterior predictive sampling.
with model:
    ppc = pm.sample_posterior_predictive(idata, var_names=["f"])