Gaussian process

Introduction

A Gaussian process is a stochastic process, a probability distribution over functions, such that the values of the function evaluated at any finite set of points jointly have a Gaussian distribution. Gaussian processes have been widely studied. Several popular statistical models, such as the autoregressive moving average (ARMA) model, the Kalman filter, and radial basis function networks, can all be viewed as specialized Gaussian processes.

Prerequisites

To understand Gaussian processes, we recommend first getting acquainted with the multivariate Gaussian distribution and linear regression.

Problem setting

We will study Gaussian processes in the context of regression problems.

In regression, the goal of the predictive model is to predict a continuous-valued output for a given multivariate instance. In this article, for simplicity, we will work with real-valued input observations.

Consider such an instance \( \vx \in \real^\ndim \), a vector consisting of \( \ndim \) features, \(\vx = [x_1, x_2, \ldots, x_\ndim] \).

We need to predict a real-valued output \( \hat{y} \in \real \) that is as close as possible to the true target \( y \in \real \). The hat \( \hat{ } \) denotes that \( \hat{y} \) is an estimate, to distinguish it from the true value.

In the standard linear regression model with Gaussian noise, the actual target \( y \) is related to the input \( \vx \) through some function \( f: \real^\ndim \to \real \) such that

$$ y = f(\vx) + \epsilon $$

where \( \epsilon \) is zero-mean Gaussian noise with variance \( \sigma^2 \); that is, \( \epsilon \sim \Gauss(0,\sigma^2) \).
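As a quick sketch of this observation model (not part of the original article), the snippet below draws a noisy target from \( y = f(\vx) + \epsilon \) using NumPy. The particular function `f` and the noise level `sigma` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical underlying function f: R^d -> R (any choice would do here).
def f(x):
    return np.sin(x[0]) + 0.5 * x[1]

sigma = 0.1  # assumed noise standard deviation

# One noisy observation: y = f(x) + eps, with eps ~ N(0, sigma^2).
x = np.array([1.0, 2.0])
y = f(x) + rng.normal(0.0, sigma)
```

Here the noise term is drawn independently for each observation, matching the i.i.d. assumption made below for the training set.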

The predictive model is inferred from a collection of supervised observations provided as tuples \( (\vx_i,y_i) \) containing the instance vector \( \vx_i \) and the true target variable \( y_i \). This collection of labeled observations is known as the training set \( \labeledset = \set{(\vx_1,y_1), \ldots, (\vx_\nlabeled,y_\nlabeled)} \). Typically, these examples are assumed to be independent and identically distributed random variables.

Definition

A collection of random variables is known as a Gaussian process (GP), if any finite number of those random variables have a joint Gaussian distribution.

Technically, a GP is a distribution over functions. This distribution is completely specified by its

  • mean function: \( m(\vx) = \expect{}{f(\vx)}\)
  • covariance function: \( k(\vx, \dash{\vx}) = \expect{}{(f(\vx) - m(\vx)) (f(\dash{\vx}) - m(\dash{\vx}))} \)

We denote the distribution as

$$ f(\vx) \sim GP(m(\vx), k(\vx, \dash{\vx})) $$
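To make the definition concrete (this example is not from the original article), we can evaluate a GP at a finite set of input points: by definition, the function values there are jointly Gaussian with mean vector \( m(\vx_i) \) and covariance matrix \( K_{ij} = k(\vx_i, \vx_j) \). The zero mean function and the squared-exponential kernel below are common but assumed choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed covariance function: squared-exponential kernel k(x, x').
def k(x1, x2, length_scale=1.0):
    return np.exp(-0.5 * (x1 - x2) ** 2 / length_scale ** 2)

# A finite set of one-dimensional input points.
xs = np.linspace(-3.0, 3.0, 50)

m = np.zeros_like(xs)            # assumed mean function m(x) = 0
K = k(xs[:, None], xs[None, :])  # covariance matrix K[i, j] = k(x_i, x_j)

# One draw from the GP prior: the sampled values of f at the points xs.
# A small jitter on the diagonal keeps K numerically positive definite.
f_sample = rng.multivariate_normal(m, K + 1e-9 * np.eye(len(xs)))
```

Each call to `multivariate_normal` yields a different sample function; plotting several such draws is the usual way to visualize a GP prior.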
