pymc.gp.MarginalApprox

class pymc.gp.MarginalApprox(approx='VFE', *, mean_func=<pymc.gp.mean.Zero object>, cov_func=<pymc.gp.cov.Constant object>)

Approximate marginal Gaussian process.

The gp.MarginalApprox class models the sum of a GP prior and additive white noise, using sparse approximations built on a set of inducing point locations Xu. It has marginal_likelihood, conditional, and predict methods, and is suited to regression on normally distributed data. The available approximations are listed below, followed by a short construction sketch:

  • DTC: Deterministic Training Conditional

  • FITC: Fully Independent Training Conditional

  • VFE: Variational Free Energy
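
All three approximations share the same interface; only the approx string passed at construction differs, as in the following minimal sketch (the covariance function chosen here is arbitrary):

import pymc as pm

cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

# Only the `approx` argument changes between the three variants.
gp_vfe = pm.gp.MarginalApprox(cov_func=cov_func, approx="VFE")    # the default
gp_fitc = pm.gp.MarginalApprox(cov_func=cov_func, approx="FITC")
gp_dtc = pm.gp.MarginalApprox(cov_func=cov_func, approx="DTC")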

Parameters:
mean_func : Mean, default Zero

The mean function.

cov_func : 2D array_like, or Covariance, default Constant

The covariance function.

approx : str, default ‘VFE’

The approximation to use. Must be one of VFE, FITC or DTC.

References

  • Quinonero-Candela, J., and Rasmussen, C. E. (2005). A Unifying View of Sparse Approximate Gaussian Process Regression.

  • Titsias, M. (2009). Variational Learning of Inducing Variables in Sparse Gaussian Processes.

  • Bauer, M., van der Wilk, M., and Rasmussen, C. E. (2016). Understanding Probabilistic Sparse Gaussian Process Approximations.

Examples

import numpy as np
import pymc as pm

# A one dimensional column vector of inputs.
X = np.linspace(0, 1, 10)[:, None]

# Observed outputs (synthetic here, purely for illustration).
y = np.sin(2 * np.pi * X.flatten()) + 0.1 * np.random.randn(10)

# A smaller set of inducing inputs.
Xu = np.linspace(0, 1, 5)[:, None]

with pm.Model() as model:
    # Specify the covariance function.
    cov_func = pm.gp.cov.ExpQuad(1, ls=0.1)

    # Specify the GP.  The default mean function is `Zero`.
    gp = pm.gp.MarginalApprox(cov_func=cov_func, approx="FITC")

    # Place a GP prior over the function f.
    sigma = pm.HalfCauchy("sigma", beta=3)
    y_ = gp.marginal_likelihood("y", X=X, Xu=Xu, y=y, sigma=sigma)

...

# After fitting or sampling, specify the distribution
# at new points with .conditional
Xnew = np.linspace(-1, 2, 50)[:, None]

with model:
    fcond = gp.conditional("fcond", Xnew=Xnew)
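
One way to use fcond after fitting (a sketch; idata is assumed to be an InferenceData object produced by the elided fitting step, e.g. from pm.sample()):

with model:
    # Draw samples of `fcond` at the new points, conditioned on the
    # posterior samples of the GP hyperparameters.
    ppc = pm.sample_posterior_predictive(idata, var_names=["fcond"])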

Methods

MarginalApprox.__init__([approx, mean_func, ...])

MarginalApprox.conditional(name, Xnew[, ...])

Returns the approximate conditional distribution of the GP evaluated over new input locations Xnew.

MarginalApprox.marginal_likelihood(name, X, ...)

Returns the approximate marginal likelihood distribution, given the input locations X, inducing point locations Xu, data y, and white noise standard deviations sigma.

MarginalApprox.predict(Xnew[, point, diag, ...])

Return the mean vector and covariance matrix of the conditional distribution as numpy arrays, given a point, such as the MAP estimate or a sample from a trace.

MarginalApprox.prior(name, X, *args, **kwargs)
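
A minimal sketch of predict, continuing the example above (using pm.find_MAP is one possible choice of point estimate, not the only option):

with model:
    # A point estimate of the hyperparameters.
    mp = pm.find_MAP()

    # Mean vector and full covariance matrix at the new points.
    mu, cov = gp.predict(Xnew, point=mp)

    # Pointwise means and variances only.
    mu, var = gp.predict(Xnew, point=mp, diag=True)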

Attributes

X

Xu

sigma

y