Bayesian linear regression

Machine Learning

Introduction

In this article, we introduce a Bayesian analysis of the standard linear regression model with Gaussian noise. Unlike the linear least squares approach to regression, the Bayesian treatment does not seek a single optimal value of the parameters. Instead, we perform inference on the Bayesian model to estimate the posterior distribution over the weights. For predictions on test examples, we average over all possible parameter values, weighted by their posterior probability.

Understanding this model is crucial for studying the more general Gaussian processes for regression.

Prerequisites

To understand Bayesian linear regression, we recommend familiarity with the concepts in the prerequisite articles listed here.

Follow those links to first get acquainted with the corresponding concepts.

Problem setting

In regression, the goal of the predictive model is to predict a continuous-valued output for a given multivariate instance. In this article, for simplicity, we will work with real-valued input observations.

Consider such an instance \( \vx \in \real^\ndim \), a vector consisting of \( \ndim \) features, \(\vx = [x_1, x_2, \ldots, x_\ndim] \).

We need to predict a real-valued output \( \hat{y} \in \real \) that is as close as possible to the true target \( y \in \real \). The hat \( \hat{ } \) denotes that \( \hat{y} \) is an estimate, to distinguish it from the truth.

In the standard linear regression model with Gaussian noise, the actual target \( y \) is related to the input \( \vx \) through some function \( f: \real^\ndim \to \real \) such that

$$ y = f(\vx) + \epsilon $$

where, \( \epsilon \) is zero-centered Gaussian noise with variance \( \sigma^2 \). This means, \( \epsilon \sim \Gauss(0,\sigma^2) \).

The predictive model is inferred from a collection of supervised observations provided as tuples \( (\vx_i,y_i) \) containing the instance vector \( \vx_i \) and the true target variable \( y_i \). This collection of labeled observations is known as the training set \( \labeledset = \set{(\vx_1,y_1), \ldots, (\vx_\nlabeled,y_\nlabeled)} \). Typically, these examples are assumed to be independent and identically distributed (IID) random variables.
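As a concrete illustration, here is a minimal sketch of how such a training set might be simulated under this noise model. The function \( f \), the noise level, and all numeric values are hypothetical choices made purely for illustration; they are not part of the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N = 3, 50            # number of features, number of training examples
sigma = 0.5             # standard deviation of the Gaussian noise

def f(X):
    # A hypothetical underlying function, used here only to simulate data.
    return np.sin(X.sum(axis=0))

X_train = rng.normal(size=(D, N))       # instance vectors x_i stacked as columns
eps = rng.normal(0.0, sigma, size=N)    # epsilon ~ N(0, sigma^2)
y_train = f(X_train) + eps              # y_i = f(x_i) + epsilon_i
```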

Bayesian model for linear regression

In the Bayesian model for linear regression CITE[rasmussen-226], we assume the function \( f(\vx) \) to be a linear function of its input.

$$ f(\vx) = \vx^T \vw $$

where, \( \vw \) is the real-valued vector representing the parameters of the model, the so-called weights of the model. It has the same dimensionality as the input. This means, \( \vw \in \real^\ndim \).

In the Bayesian approach, we place a prior over the weights. In this analysis, we use a Gaussian prior with zero mean and covariance matrix \( \mSigma \). This means,

$$ \vw \sim \Gauss(0, \mSigma) $$
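To build intuition for this prior, one can draw weight vectors from \( \Gauss(0, \mSigma) \); each sample defines a candidate linear function \( f(\vx) = \vx^T \vw \). Below is a minimal sketch, assuming an identity prior covariance purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

D = 3
Sigma = np.eye(D)       # prior covariance; the identity is an illustrative choice

# Draw a few weight vectors from the prior w ~ N(0, Sigma).
w_samples = rng.multivariate_normal(np.zeros(D), Sigma, size=5)   # shape (5, D)

# Each sampled w defines a linear function f(x) = x^T w.
x = rng.normal(size=D)                  # an arbitrary input vector
f_values = w_samples @ x                # f(x) under each prior sample
```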

Predictive model

In non-Bayesian approaches to linear regression, we typically choose a single value of the parameter vector \( \vw \) that best fits the available training data, subject to some loss. For example, in the linear least squares model, the best parameter setting, \( \star{\vw} \), is chosen to minimize the mean squared error between the predictions and the actual targets on the training data. The prediction in such non-Bayesian schemes is then simply \( \star{f}(\vx) = \vx^T \star{\vw} \).

In the Bayesian approach, we do not discover a single value of the model parameters. We instead infer the posterior distribution over the weights, given the training set. That is, \( p(\vw | \labeledset) \). Then, the prediction on a single test instance \( \vx \) involves averaging over all possible parameter settings, weighted by their posterior probability.

So, in the Bayesian approach, the predictive model is

\begin{equation} \hat{y} = \int p(f(\vx) | \vw) p(\vw | \labeledset) d\vw \label{eqn:reg-pred} \end{equation}

Thus, to be able to make predictions, we need to infer \( p(\vw | \labeledset) \). We will arrive at this result in the upcoming sections.

Training

As mentioned earlier, training in non-Bayesian approaches amounts to finding a single best value of \( \vw \) by minimizing a loss, typically the mean squared error between the predictions and the actual target values in the training data.

In the Bayesian approach, we instead perform inference — the estimation of the posterior probability of the weights, \( p(\vw | \labeledset) \).

Using Bayes' rule, this can be calculated as

$$ p(\vw | \labeledset) = \frac{p(\vy_\labeledset | \mX_\labeledset, \vw) p(\vw)}{p(\vy_\labeledset|\mX_\labeledset)} $$

where,

  • \( p(\vw | \labeledset) \) is the posterior probability of the weights,
  • \( p(\vw) \) is the prior probability of the weights,
  • \( p(\vy_\labeledset | \mX_\labeledset, \vw) \) is the likelihood of the target variables given the input and the weights. For simplicity, we have combined all the actual target values into the vector \( \vy_\labeledset \) so that \( \vy_\labeledset = [y_1,\ldots,y_\nlabeled] \). Similarly, all inputs have been combined into a single matrix, \( \mX_\labeledset = [\vx_1,\ldots,\vx_\nlabeled] \).
  • \( p(\vy_\labeledset | \mX_\labeledset) \) is the marginal likelihood. Being a marginal, it does not depend on the model parameters. It is therefore a normalizing constant. It is given by, $$ p(\vy_\labeledset | \mX_\labeledset) = \int p(\vy_\labeledset | \mX_\labeledset, \vw) p(\vw) d\vw $$

To infer the posterior distribution, we typically do not have to calculate the normalizing constant. We inspect the product of the prior and the likelihood to identify the form of the distribution. By completing the square in the exponent, we can recover the normalizing constant in most cases.
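Since the normalizing constant does not depend on \( \vw \), it suffices to work with the product of the likelihood and the prior, most conveniently in log space. The sketch below, an illustration rather than part of the derivation, evaluates this unnormalized log posterior for a candidate weight vector using the Gaussian noise model and the Gaussian prior introduced above; the function name and argument layout are assumptions for this example.

```python
import numpy as np

def unnormalised_log_posterior(w, X, y, sigma, Sigma_inv):
    """log p(y | X, w) + log p(w), up to additive terms that do not involve w."""
    residual = y - X.T @ w                            # X has shape (D, N), columns are examples
    log_likelihood = -0.5 * residual @ residual / sigma**2
    log_prior = -0.5 * w @ Sigma_inv @ w
    return log_likelihood + log_prior
```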

The likelihood \( p(\vy_\labeledset | \mX_\labeledset, \vw) \)

Because the examples in the training set are IID, the likelihood factorizes over the examples in the training set. This means,

\begin{aligned} p(\vy_\labeledset | \mX_\labeledset, \vw) &= \prod_{\nlabeledsmall=1}^{\nlabeled} p(y_\nlabeledsmall | \vx_\nlabeledsmall, \vw) \\\\ &= \prod_{\nlabeledsmall=1}^{\nlabeled} \frac{1}{\sqrt{2 \pi \sigma^2}} \textexp{- \frac{(y_\nlabeledsmall - \vx_\nlabeledsmall^T \vw)^2}{2\sigma^2}} \\\\ &= \frac{1}{(2 \pi \sigma^2)^{\nlabeled/2}} \textexp{- \frac{\sum_{\nlabeledsmall=1}^\nlabeled (y_\nlabeledsmall - \vx_\nlabeledsmall^T \vw)^2}{2\sigma^2}} \\\\ &= \frac{1}{(2 \pi \sigma^2)^{\nlabeled/2}} \textexp{- \frac{|\vy_\labeledset - \mX_\labeledset^T \vw|^2}{2\sigma^2}} \\\\ &= \Gauss(\mX_\labeledset^T \vw, \sigma^2 \mI) \end{aligned}

The penultimate step has the familiar form of the probability density function of a multivariate Gaussian. Thus, the likelihood is itself a Gaussian distribution, a fact that will simplify the posterior and predictive calculations.
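As a quick numerical sanity check, the product of the per-example univariate Gaussian densities matches the single multivariate Gaussian density \( \Gauss(\mX_\labeledset^T \vw, \sigma^2 \mI) \). This is only a sketch with arbitrary illustrative values.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
D, N, sigma = 3, 10, 0.5

X = rng.normal(size=(D, N))                   # columns are the training inputs
w = rng.normal(size=D)
y = X.T @ w + rng.normal(0.0, sigma, size=N)

# Product of per-example univariate Gaussian densities ...
per_example = np.prod(norm.pdf(y, loc=X.T @ w, scale=sigma))

# ... equals the joint multivariate Gaussian density N(X^T w, sigma^2 I).
joint = multivariate_normal.pdf(y, mean=X.T @ w, cov=sigma**2 * np.eye(N))

assert np.isclose(per_example, joint)
```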

The posterior \( p(\vw | \labeledset) \)

Ignoring the normalizing constant, we can now write the posterior distribution as being proportional to the product of the likelihood and the prior.

\begin{aligned} p(\vw | \labeledset) &\propto p(\vy_\labeledset | \mX_\labeledset, \vw) p(\vw) \\\\ &\propto \Gauss(\mX_\labeledset^T \vw, \sigma^2 \mI) \Gauss(0, \mSigma) \\\\ &\propto \textexp{- \frac{(\vy_\labeledset - \mX_\labeledset^T \vw)^T(\vy_\labeledset - \mX_\labeledset^T \vw)}{2\sigma^2}} \textexp{- \frac{1}{2} \vw^T \mSigma^{-1} \vw} \\\\ &\propto \textexp{- \frac{1}{2}(\vw - \bar{\vw})^T \left( \frac{1}{\sigma^2} \mX_\labeledset \mX_\labeledset^T + \mSigma^{-1} \right) (\vw - \bar{\vw})} \end{aligned}

where we have absorbed all terms that do not involve \( \vw \) into the proportionality constant and, for simplicity, set

$$ \bar{\vw} = \frac{1}{\sigma^2} \left( \frac{\mX_\labeledset \mX_\labeledset^T}{\sigma^2} + \mSigma^{-1} \right)^{-1} \mX_\labeledset \vy_\labeledset $$

Note the last expression in the expansion of \( p(\vw | \labeledset) \). It has the form of another multivariate Gaussian density, so that

$$ p(\vw | \labeledset) = \Gauss(\bar{\vw}, \inv{\mA}) $$

where \( \mA \) is the precision matrix

$$ \mA = \frac{\mX_\labeledset \mX_\labeledset^T}{\sigma^2} + \mSigma^{-1} $$

and the covariance matrix of the posterior is its inverse, \( \mA^{-1} \).
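In code, the posterior parameters follow directly from these two formulas. Here is a minimal sketch, assuming \( \mX_\labeledset \) is stored as a \( \ndim \times \nlabeled \) matrix whose columns are the training inputs; the helper name is our own.

```python
import numpy as np

def weight_posterior(X, y, sigma, Sigma):
    """Return the mean and covariance of p(w | D) = N(w_bar, A^{-1})."""
    A = X @ X.T / sigma**2 + np.linalg.inv(Sigma)    # precision matrix A
    A_inv = np.linalg.inv(A)                         # posterior covariance A^{-1}
    w_bar = A_inv @ X @ y / sigma**2                 # posterior mean w_bar
    return w_bar, A_inv
```

In practice one would solve the corresponding linear systems rather than form explicit inverses, but the explicit form above mirrors the equations directly.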

The final predictive model

Now that we have inferred the posterior probability \( p(\vw | \labeledset) \), we can write the final predictive model of the Bayesian linear regression approach.

Starting from the predictive model defined in Equation \eqref{eqn:reg-pred}, we substitute the results for the relevant probabilities to arrive at the result.

\begin{aligned} \hat{y} &= \int p(f(\vx) | \vw) p(\vw | \labeledset) d\vw \\\\ &= \Gauss(\vx^T\bar{\vw}, \vx^T\mA^{-1}\vx) \end{aligned}
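Putting it together, the predictive mean and variance at a test input follow directly from the posterior parameters. A minimal sketch, reusing the hypothetical `weight_posterior` helper from the previous section:

```python
import numpy as np

def predict(x_test, w_bar, A_inv):
    """Predictive mean and variance at a single test input x_test."""
    mean = x_test @ w_bar                 # x^T w_bar
    var = x_test @ A_inv @ x_test         # x^T A^{-1} x
    return mean, var

# Example usage (names are illustrative):
# w_bar, A_inv = weight_posterior(X_train, y_train, sigma, Sigma)
# mean, var = predict(x_new, w_bar, A_inv)
```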

Relationship to ridge regression

We have presented ridge regression in another article. It is a linear least squares model, with an additional \( L_2 \) regularization term on the weight parameters. The solution of the ridge regression model is \( \star{\vw}_{\text{ridge}} \), calculated as

$$ \star{\vw}_{\text{ridge}} = \left(\mX_\labeledset\mX_\labeledset^T + \lambda\mI\right)^{-1}\mX_\labeledset\vy_\labeledset $$

Although ridge regression is not a Bayesian approach, its solution is similar to the maximum a posteriori (MAP) estimate, the mode of the posterior distribution over the weights that we found earlier. Since the posterior is a Gaussian distribution, its mode is the same as its mean. This means, the MAP estimate of \( \vw \) is \( \bar{\vw} \), which we calculated to be

$$ \star{\vw}_{\text{MAP}} = \bar{\vw} = \frac{1}{\sigma^2} \left( \frac{\mX_\labeledset \mX_\labeledset^T}{\sigma^2} + \mSigma^{-1} \right)^{-1} \mX_\labeledset \vy_\labeledset $$

Except for the terms that explicitly model the noise distribution ( \( \sigma^2 \) ), the optimal solution \( \star{\vw}_{\text{ridge}} \) of ridge regression is quite similar to the MAP estimate \( \star{\vw}_{\text{MAP}} \) when the prior is an isotropic Gaussian, that is, \( \mSigma^{-1} = \lambda \mI \).
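To see this explicitly, pull the factor \( \frac{1}{\sigma^2} \) inside the inverse in the expression for \( \bar{\vw} \) above, giving

\begin{aligned} \star{\vw}_{\text{MAP}} &= \left( \mX_\labeledset \mX_\labeledset^T + \sigma^2 \mSigma^{-1} \right)^{-1} \mX_\labeledset \vy_\labeledset \\\\ &= \left( \mX_\labeledset \mX_\labeledset^T + \sigma^2 \lambda \mI \right)^{-1} \mX_\labeledset \vy_\labeledset \qquad \text{if } \mSigma^{-1} = \lambda \mI \end{aligned}

which matches \( \star{\vw}_{\text{ridge}} \), with the regularization strength scaled by the noise variance \( \sigma^2 \).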
