Introduction
Ridge regression is a regularized version of linear least squares regression. It works by shrinking the coefficients or weights of the regression model towards zero. This is achieved by imposing a squared penalty on their size.
To understand ridge regression, we recommend familiarity with the concepts covered in the prerequisite articles linked above. Follow those links first to get acquainted with the corresponding concepts.
In regression, the goal of the predictive model is to predict a continuous valued output for a given multivariate instance.
Consider such an instance \( \vx \in \real^N \), a vector consisting of \( N \) features, \(\vx = [x_1, x_2, \ldots, x_N] \).
We need to predict a real-valued output \( \hat{y} \in \real \) that is as close as possible to the true target \( y \in \real \). The hat \( \hat{ } \) denotes that \( \hat{y} \) is an estimate, to distinguish it from the truth.
The predictive model of ridge regression is the same as that of linear least squares regression. It is a linear combination of the input features with an additional bias term.
\begin{equation} \hat{y} = \vx^T \vw + b \label{eqn:reg-pred} \end{equation}
where \( \vw \) are known as the weights or parameters of the model and \( b \) is known as the bias of the model. The parameters are an \(N\)-dimensional vector, \( \vw \in \real^N \), just like the input. The bias term is a real-valued scalar, \( b \in \real \).
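To make the model concrete, here is a minimal sketch of the prediction step in NumPy. The feature values, weights, and bias below are hypothetical numbers chosen purely for illustration.

```python
import numpy as np

# Hypothetical 3-feature instance, weight vector, and bias, for illustration only.
x = np.array([0.5, -1.2, 3.0])   # input instance, shape (N,)
w = np.array([0.8, 0.1, -0.4])   # weights, shape (N,)
b = 2.0                          # bias term

y_hat = x @ w + b                # linear prediction: x^T w + b
print(y_hat)
```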
Training a ridge regression model involves discovering suitable weights \( \vw \) and bias \( b \).
The training approach fits the weights to minimize the squared prediction error on the training data. Specifically, in the case of ridge regression, there is an additional term in the loss function: a penalty on the sum of squares of the weights.
Suppose \( \labeledset = \set{(\vx_1, y_1), \ldots, (\vx_\nlabeled, y_\nlabeled)} \) denotes the training set consisting of \( \nlabeled \) training instances. If \( \yhat_\nlabeledsmall \) denotes the prediction of the model for the instance \( (\vx_\nlabeledsmall, y_\nlabeledsmall) \), then the squared error over a single training example is
\begin{aligned} \ell(y_\nlabeledsmall, \yhat_\nlabeledsmall) &= \left( y_\nlabeledsmall - \yhat_\nlabeledsmall \right)^2 \\\\ &= \left(y_\nlabeledsmall - \vx_\nlabeledsmall^T\vw - b \right)^2 \end{aligned}
The overall loss over the training set is the sum of these squared errors and the penalty involving the sum of squares of the weights.
\begin{equation} \mathcal{L}(\labeledset) = \sum_{\nlabeledsmall=1}^\nlabeled \left(y_\nlabeledsmall - \vx_\nlabeledsmall^T \vw - b\right)^2 + \lambda \norm{\vw}{}^2 \label{eqn:ridge-loss} \end{equation}
Here, the hyperparameter \( \lambda \) controls the amount of penalty on the weights. Larger values of \( \lambda \) enforce a stronger reduction in the magnitude of the weight vector. Smaller values have the opposite effect, allowing weights with larger magnitudes.
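As a sanity check on the formula, here is a small NumPy sketch that evaluates the loss in Equation \eqref{eqn:ridge-loss} on synthetic data. The data, weights, and the value of \( \lambda \) are arbitrary placeholders.

```python
import numpy as np

def ridge_loss(X, y, w, b, lam):
    """Sum of squared errors plus lam * ||w||^2; the bias b is not penalized."""
    residuals = y - X @ w - b              # prediction errors over all M instances
    return np.sum(residuals ** 2) + lam * np.sum(w ** 2)

# Tiny synthetic example with placeholder values.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                # 5 instances, 3 features
y = rng.normal(size=5)
w = rng.normal(size=3)
b = 0.1

print(ridge_loss(X, y, w, b, lam=1.0))
```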
The model parameters are fit to the training data by minimizing the loss above.
$$ \star{\vw} = \argmin_{\vw} \sum_{\nlabeledsmall=1}^\nlabeled \left(y_\nlabeledsmall - \vx_\nlabeledsmall^T \vw - b\right)^2 + \lambda \norm{\vw}{}^2 $$
Note that the loss function is a convex quadratic in \( \vw \), so its minimum always exists. Moreover, in the case of ridge regression, the solution is unique. This was not the case with vanilla linear least squares. We will study the reason for this in a bit.
We cover the general motivations behind coefficient penalization in more detail in our comprehensive article on regularization techniques. Here, we provide some intuition on coefficient shrinkage in the context of ridge regression.
An alternative way to express the minimization of the loss in Equation \eqref{eqn:ridge-loss} is as a constrained problem:
\begin{align} \star{\vw} =& \argmin_{\vw} \sum_{\nlabeledsmall=1}^\nlabeled \left(y_\nlabeledsmall - \vx_\nlabeledsmall^T \vw - b\right)^2 \\\\ & \text{subject to } \sum_{\ndimsmall=1}^{\ndim} w_\ndimsmall^2 \le s \label{eqn:ridge-loss-alternate} \end{align}
This is similar in spirit to the loss in Equation \eqref{eqn:ridge-loss}, because the size constraint \( s \) has a similar effect to that of \( \lambda \). Larger values of \( s \) allow coefficients with larger magnitudes, just as a smaller value of \( \lambda \) would. Smaller values of \( s \) constrain the weights to smaller magnitudes, similar to a larger value of \( \lambda \).
Now, imagine there were no size constraint at all; in other words, \( s = \infty \). What would happen? When features are correlated or redundant, some weights could grow very large and positive, while other weights grow correspondingly large and negative to counter them, still producing the same \( \yhat \) and the same loss value. In fact, there is then no limit to the variation in the weights that solve this minimization problem.
This means that, without a constraint, a unique solution to the optimization problem cannot be guaranteed. Imposing a size constraint, or equivalently a regularization penalty, on the weights prevents this problem.
Notice that the bias term has been left out of the penalty term of the loss in Equation \eqref{eqn:ridge-loss}.
It is natural to ask the question: If the bias term is also a parameter of the regression model, then why don't we regularize it?
Consider this thought experiment. If a constant term \( c \) is added to each of the targets \( y_\nlabeledsmall \), then the entire predictive model should shift accordingly. During training, the bias term will adapt to absorb this constant, so that predictions from the trained model also shift by the same constant \( c \).
Thus, intuitively, the bias term centers the linear predictive model. Any constant added to all targets merely shifts the center of the target variable, and that shift is absorbed by the bias term.
Now, if we imposed a shrinking penalty on the bias term, it would be forced towards zero and would be unable to model this centering effect.
Therefore, we do not penalize the bias term.
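A quick numerical check of this thought experiment is sketched below, using scikit-learn's Ridge estimator, which, like the formulation above, does not penalize the intercept. The synthetic data and the constant \( c \) are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

c = 10.0  # constant added to every target

model = Ridge(alpha=1.0).fit(X, y)
model_shifted = Ridge(alpha=1.0).fit(X, y + c)

print(model.coef_, model.intercept_)            # weights and bias on original targets
print(model_shifted.coef_, model_shifted.intercept_)
# The weights are unchanged; only the intercept shifts, by (approximately) c.
```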
Unlike linear least squares regression, preprocessing the input features is particularly important for ridge regression, since the penalty depends on the magnitude of the coefficients and hence on the scale and location of the features. For the analysis here, the key preprocessing step is centering: each feature value is replaced by \( x_{m\ndimsmall} - \bar{x}_\ndimsmall \), where \( \bar{x}_\ndimsmall = \frac{1}{\nlabeled}\sum_{\nlabeledsmall=1}^{\nlabeled} x_{m\ndimsmall} \) is the mean of the \( \ndimsmall \)-th feature over the training set. With centered inputs, the optimal bias turns out to be simply the mean of the targets,
$$ b^{(c)} = \frac{1}{\nlabeled} \sum_{\nlabeledsmall=1}^{\nlabeled} y_\nlabeledsmall $$
Why would this centering lead to the same solution as the original problem? Let's find out.
Here's the loss for ridge regression from Equation \eqref{eqn:ridge-loss}, written in its non-vectorized form.
\begin{align} \mathcal{L}(\labeledset) &= \sum_{\nlabeledsmall=1}^\nlabeled \left(y_\nlabeledsmall - \sum_{\ndimsmall=1}^\ndim x_{m\ndimsmall} w_\ndimsmall - b\right)^2 + \lambda \sum_{\ndimsmall=1}^\ndim w_\ndimsmall^2 \\\\ &= \sum_{\nlabeledsmall=1}^\nlabeled \left[y_\nlabeledsmall - \left(\sum_{\ndimsmall=1}^\ndim (x_{m\ndimsmall} - \bar{x}_\ndimsmall) w_\ndimsmall \right) - \sum_{\ndimsmall=1}^\ndim \bar{x}_{\ndimsmall} w_\ndimsmall - b\right]^2 + \lambda \sum_{\ndimsmall=1}^\ndim w_\ndimsmall^2 \\\\ &= \sum_{\nlabeledsmall=1}^\nlabeled \left[y_\nlabeledsmall - \left(\sum_{\ndimsmall=1}^\ndim (x_{m\ndimsmall} - \bar{x}_\ndimsmall) w_\ndimsmall^{(c)} \right) - b^{(c)}\right]^2 + \lambda \sum_{\ndimsmall=1}^\ndim \left(w_\ndimsmall^{(c)}\right)^2 \\\\ \label{eqn:ridge-loss-centering} \end{align}
Here, we have defined the centered coefficients and bias as
\begin{align} b^{(c)} &= \sum_{\ndimsmall=1}^\ndim \bar{x}_{\ndimsmall} w_\ndimsmall + b \\\\ w_\ndimsmall^{(c)} &= w_\ndimsmall \end{align}
Since \( w_\ndimsmall^{(c)} = w_\ndimsmall \), the weights that minimize the loss with centered inputs are exactly the weights that minimize the original, uncentered loss.
What about the centered bias \( b^{(c)} \)? What value minimizes the loss? Let's find out.
For that, we take the derivative of the loss with respect to \( b^{(c)} \), set it to zero, and drop the constant factor of \( -2 \). That is,
$$ \sum_{\nlabeledsmall=1}^\nlabeled \left[y_\nlabeledsmall - \left(\sum_{\ndimsmall=1}^\ndim (x_{m\ndimsmall} - \bar{x}_\ndimsmall) w_\ndimsmall^{(c)} \right) - b^{(c)}\right] = 0 $$
Since the centered inputs sum to zero over the training set, that is, \( \sum_{\nlabeledsmall=1}^{\nlabeled} (x_{m\ndimsmall} - \bar{x}_\ndimsmall) = 0 \) for every feature \( \ndimsmall \), this implies \( b^{(c)} = \frac{1}{\nlabeled} \sum_{\nlabeledsmall=1}^{\nlabeled} y_\nlabeledsmall = \bar{\vy} \), the average of all the target variables in the training set. Thus, if we also center the target variables, we do not even have to include the bias in the loss. Continuing from Equation \eqref{eqn:ridge-loss-centering}, we can substitute the solution for \( b^{(c)} \) to get
\begin{align} \mathcal{L}(\labeledset) &= \sum_{\nlabeledsmall=1}^\nlabeled \left[y_\nlabeledsmall - \left(\sum_{\ndimsmall=1}^\ndim (x_{m\ndimsmall} - \bar{x}_\ndimsmall) w_\ndimsmall^{(c)} \right) - \bar{\vy} \right]^2 + \lambda \sum_{\ndimsmall=1}^\ndim \left(w_\ndimsmall^{(c)}\right)^2 \\\\ &= \sum_{\nlabeledsmall=1}^\nlabeled \left[y_\nlabeledsmall^{(c)} - \left(\sum_{\ndimsmall=1}^\ndim (x_{m\ndimsmall} - \bar{x}_\ndimsmall) w_\ndimsmall^{(c)} \right) \right]^2 + \lambda \sum_{\ndimsmall=1}^\ndim \left(w_\ndimsmall^{(c)}\right)^2 \label{eqn:ridge-loss-centering-2} \end{align}
where we have centered the target variables as \( y_\nlabeledsmall^{(c)} = y_\nlabeledsmall - \bar{\vy} \).
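The centering described above is straightforward to carry out in code. The sketch below, on made-up data, subtracts the per-feature means from the inputs and the target mean from the targets, which is all that is needed before dropping the bias term.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(20, 4))   # raw inputs with nonzero feature means
y = rng.normal(loc=3.0, size=20)        # raw targets with nonzero mean

x_bar = X.mean(axis=0)                  # per-feature means, shape (N,)
y_bar = y.mean()                        # mean of the targets

X_centered = X - x_bar                  # each feature now has zero mean
y_centered = y - y_bar                  # targets now have zero mean

# After centering, the bias can be dropped from training; for a new raw
# instance x, predictions are recovered as (x - x_bar) @ w + y_bar.
print(X_centered.mean(axis=0), y_centered.mean())   # both (numerically) zero
```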
For the remaining analysis of ridge regression, we will assume that the inputs and targets have been centered, so that the bias term can be dropped from the analysis.
With this centering in place, we can collect the training inputs into a matrix \( \mX \in \real^{\nlabeled \times \ndim} \), where each row of \( \mX \) is a training instance \( \vx_\nlabeledsmall \), for \( \nlabeledsmall \in \set{1, 2, \ldots, \nlabeled} \).
Similarly, the centered target variables can be collected into a vector \( \vy \in \real^\nlabeled \), whose \( \nlabeledsmall \)-th element is the target variable corresponding to the \( \nlabeledsmall \)-th row of \( \mX \).
With this notation, and since we no longer have to worry about the bias term, we can write the ridge regression loss function as
$$ \mathcal{L}(\labeledset) = \left(\vy - \mX\vw\right)^T (\vy - \mX\vw) + \lambda \vw^T\vw $$
Pretty convenient, isn't it?
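For completeness, here is a short sketch verifying that the matrix form of the loss matches the elementwise form from earlier. The data and weights are again arbitrary placeholders, with the inputs and targets assumed to be already centered.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))      # assumed centered inputs, shape (M, N)
y = rng.normal(size=10)           # assumed centered targets, shape (M,)
w = rng.normal(size=4)
lam = 0.5

# Matrix form: (y - Xw)^T (y - Xw) + lam * w^T w
r = y - X @ w
loss_matrix = r @ r + lam * (w @ w)

# The same loss written with explicit sums over instances and features.
loss_sums = sum((y[m] - X[m] @ w) ** 2 for m in range(len(y))) + lam * np.sum(w ** 2)

print(np.isclose(loss_matrix, loss_sums))   # True
```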
In matrix form, it is even easier to express the steps for finding the minimizer of the ridge regression loss.
We just take the derivative of the loss with respect to the parameters \( \vw \) and set it to zero. This results in the following steps towards the solution for \( \vw \)
\begin{align} &\frac{\partial \loss(\labeledset)}{\partial \vw} = 0 \\\\ \implies& -2\mX^T(\vy - \mX\vw) + 2\lambda\vw = 0 \\\\ \implies& -\mX^T\vy + \mX^T\mX\vw + \lambda\vw = 0 \\\\ \implies& \left(\mX^T\mX + \lambda\mI\right)\vw = \mX^T\vy \\\\ \implies& \vw = \left(\mX^T\mX + \lambda\mI\right)^{-1}\mX^T\vy \end{align}
In the last step, we have taken the inverse of \( \left(\mX^T\mX + \lambda\mI\right) \). Even if \( \mX^T\mX \) is not full rank, the matrix \( \left(\mX^T\mX + \lambda\mI\right) \) is invertible: \( \mX^T\mX \) is positive semi-definite, so adding a positive value \( \lambda \) along its diagonal makes it positive definite, and hence nonsingular. To understand this better, refer to our comprehensive article on singular matrices.
In fact, avoiding a possibly singular matrix \( \mX^T\mX \) was the primary motivation behind introducing the \( \lambda \) term in the solution. Compare this to the solution for vanilla linear least squares, where we had to assume that the matrix \( \mX^T\mX \) is invertible. In the case of ridge regression, we make no such assumption, because \( \mX^T\mX + \lambda\mI \) is always invertible for \( \lambda > 0 \).
There we have it: a closed-form solution for the optimal coefficients of the ridge regression model.
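The closed-form solution translates directly into a few lines of code. The sketch below is one possible implementation on centered synthetic data; it deliberately duplicates a column of \( \mX \) so that \( \mX^T\mX \) is singular, yet the ridge system remains solvable. Solving the linear system is preferred over forming the matrix inverse explicitly, for numerical stability.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution w = (X^T X + lam * I)^{-1} X^T y on centered data."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    # Solve the linear system instead of explicitly inverting A.
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
X = np.hstack([X, X[:, [0]]])            # duplicate a column, so X^T X is singular
X = X - X.mean(axis=0)                   # center the inputs
y = rng.normal(size=30)
y = y - y.mean()                         # center the targets

w = ridge_fit(X, y, lam=1.0)             # still solvable despite singular X^T X
print(w)
```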
As you will see in this demo, training is instantaneous thanks to the closed-form solution for the optimal parameters that we arrived at above.
Note that increasing the value of \( \lambda \) increases the effect of regularization, leading to a reduction in the magnitude of the weight vector \( \vw \). This comes at the cost of a slight increase in the sum of squared errors (SSE) on the training set.
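This effect of \( \lambda \) is easy to observe numerically. The sketch below refits the model for a few increasing values of \( \lambda \) on synthetic centered data; the weight norm shrinks while the training SSE creeps up.

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution on centered data (see the earlier sketch).
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X = X - X.mean(axis=0)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=50)
y = y - y.mean()

for lam in [0.0, 1.0, 10.0, 100.0]:
    w = ridge_fit(X, y, lam)
    sse = np.sum((y - X @ w) ** 2)
    print(f"lambda={lam:6.1f}  ||w||={np.linalg.norm(w):.3f}  SSE={sse:.3f}")
```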
Note that the predictive model involves a dot product of the weight vector \( \vw \) and the instance vector \( \vx \). This is straightforward for binary and continuous features, since both can be treated as real-valued.
In the case of categorical features, a direct dot product with the weight vector is not meaningful. Therefore, we first need to preprocess the categorical variables using one-hot encoding to arrive at a binary feature representation.
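One of several ways to perform this encoding is sketched below, using pandas.get_dummies on a small made-up table with one continuous and one categorical column.

```python
import pandas as pd

# Hypothetical mixed-type data: one continuous and one categorical feature.
df = pd.DataFrame({
    "age": [23, 35, 41],
    "color": ["red", "green", "red"],    # categorical feature
})

# One-hot encode the categorical column into binary indicator columns.
X = pd.get_dummies(df, columns=["color"])
print(X)   # columns: age, color_green, color_red -- all numeric, ready for the dot product
```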
An alternative to ridge regression is the lasso regression model, another regularized linear model for regression. To model nonlinear functions, a popular alternative is kernel regression.
Regression methods deal with real-valued outputs. For categorical outputs, it is better to use classification models such as logistic regression.