Eigendecomposition

Linear Algebra

Eigendecomposition appears repeatedly in machine learning, sometimes as the key step of the learning algorithm itself.

In this article, we provide a comprehensive overview of eigenvalues, eigenvectors, and eigendecomposition, the process of decomposing a matrix into its eigenvalues and eigenvectors. Explore the interactive demos to get a deeper understanding of the material.

Prerequisites

To understand eigendecomposition, we recommend familiarity with the concepts in

Follow the above links to first get acquainted with the corresponding concepts.

Eigenvectors and eigenvalues

We have already seen how multiplying a vector by a matrix applies a linear transformation to it.

What if the vector \( \vy \) resulting from \( \mA \vx \) were just a scaled version of the input vector \( \vx \), so that \( \mA \vx = \vy = \lambda \vx \)?

That is, \( \mA \vx = \lambda \vx \). This can be interpreted to mean that the matrix \( \mA \) merely stretches or shrinks the vector \( \vx \); there is no rotation.

Of course, such an interesting transformation does not happen for an arbitrary pair of matrix and vector. The relationship between \( \mA \) and \( \vx \) must be special for \( \mA \vx = \lambda \vx \) to hold.

Such a special vector \( \vx \) is known as an eigenvector of the matrix \( \mA \). The stretch or shrinkage that it undergoes, \( \lambda \), is known as the eigenvalue corresponding to that eigenvector.

Now, trivially, if a vector \( \vx \) is an eigenvector of a matrix, then so is any nonzero scaled version of it, \( \gamma \vx \). So, when one talks about eigenvectors, one is always referring to unit eigenvectors, such that \( || \vx ||_2 = 1 \).
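To see this concretely, here is a minimal NumPy sketch; the example matrix is an arbitrary choice for illustration, not one from this article. It checks that \( \mA \vx = \lambda \vx \) holds for each eigenvector and that the returned eigenvectors are unit vectors.

```python
import numpy as np

# Arbitrary example matrix, chosen only for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and the unit-norm eigenvectors
# (one eigenvector per column of the second return value).
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    x = eigenvectors[:, i]                       # i-th unit eigenvector
    print(np.allclose(A @ x, lam * x))           # True: A only scales x by lambda
    print(np.isclose(np.linalg.norm(x), 1.0))    # True: unit length
```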

Properties of eigenvectors and eigenvalues

Here are some important facts to know about eigenvalues and eigenvectors; a short numerical check of some of them follows the list.

  • There can be anywhere between 0 and \( n \) real eigenvalues (with corresponding eigenvectors) for an \( n \times n \) matrix.
  • Eigenvalues and eigenvectors are not defined for rectangular matrices.
  • Square symmetric matrices have real eigenvalues and a complete set of orthonormal eigenvectors.
  • If two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue.
  • All eigenvalues of a positive definite matrix are positive.
  • All eigenvalues of a positive semi-definite matrix are non-negative.
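As a quick numerical check of the symmetric and positive semi-definite facts above, the following NumPy sketch (with randomly constructed example matrices, purely for illustration) verifies real eigenvalues, orthonormal eigenvectors, and non-negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))

# A symmetric matrix: real eigenvalues and orthonormal eigenvectors.
S = B + B.T
eigenvalues, Q = np.linalg.eigh(S)          # eigh is specialized for symmetric matrices
print(np.all(np.isreal(eigenvalues)))       # True: real eigenvalues
print(np.allclose(Q.T @ Q, np.eye(3)))      # True: orthonormal eigenvectors

# A positive semi-definite matrix, e.g. B^T B: non-negative eigenvalues.
psd_eigenvalues = np.linalg.eigvalsh(B.T @ B)
print(np.all(psd_eigenvalues >= -1e-10))    # True, up to floating-point round-off
```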

Eigendecomposition

Suppose \( \mA \in \real^{n \times n} \) is a square matrix with \( n \) linearly independent eigenvectors \( \vx_1, \ldots, \vx_n \) and corresponding eigenvalues \( \lambda_1,\ldots,\lambda_n \).

Let's define a new matrix \( \mQ \) to be a collection of these eigenvectors, such that each column of \( \mQ \) is an eigenvector of \( \mA \)

$$ \mQ = \begin{bmatrix} \vx_1 ~\ldots~ \vx_n \end{bmatrix} $$

Now,

$$ \mA \mQ = \begin{bmatrix} \mA \vx_1 ~\ldots~ \mA \vx_n \end{bmatrix} $$

And because every \( \vx_i \) is an eigenvector, \( \mA \vx_i = \lambda_i \vx_i \). Substituting,

$$ \mA \mQ = \begin{bmatrix} \lambda_1 \vx_1 ~ \ldots ~ \lambda_n \vx_n \end{bmatrix} $$

This means \( \mA \mQ = \mQ \mLambda \), where \( \mLambda \) denotes the diagonal matrix of eigenvalues.

$$ \mLambda = \begin{bmatrix} \lambda_1 & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \lambda_n \end{bmatrix} $$

Since the matrix \( \mQ \) is invertible (do you know why?), we can multiply both sides of the equation on the right by \( \mQ^{-1} \).

$$ \mA = \mQ \mLambda \mQ^{-1} $$

Here, \( \mLambda \) is a diagonal matrix whose entries along the main diagonal are the eigenvalues of \( \mA \). The matrix \( \mQ \in \real^{n \times n}\) is a square matrix such that the \( i \)-th column of \( \mQ \) is the eigenvector of \( \mA \) corresponding to the eigenvalue in the \( i \)-th diagonal entry of \( \mLambda \).

Any square matrix with \( n \) linearly independent eigenvectors can be factorized this way. This factorization of a matrix into its eigenvalues and eigenvectors is known as the eigendecomposition of the matrix.
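The derivation above can be reproduced numerically. The sketch below (NumPy, with an arbitrary illustrative matrix) builds \( \mQ \) and \( \mLambda \), checks that \( \mA \mQ = \mQ \mLambda \), and recovers \( \mA = \mQ \mLambda \mQ^{-1} \).

```python
import numpy as np

# Arbitrary diagonalizable matrix, used only for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)   # columns of Q are the eigenvectors
Lambda = np.diag(eigenvalues)       # diagonal matrix of eigenvalues

# A Q = Q Lambda, because A x_i = lambda_i x_i for every column x_i of Q.
print(np.allclose(A @ Q, Q @ Lambda))                   # True

# Multiplying on the right by Q^{-1} recovers the factorization A = Q Lambda Q^{-1}.
print(np.allclose(Q @ Lambda @ np.linalg.inv(Q), A))    # True
```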

Eigendecomposition recovery demo

Check out this interactive demo to understand eigendecomposition visually. The column on the right shows the space transformed by \( \mA \). The bottom row shows the same transformation, arrived at by sequentially multiplying with the components of the eigendecomposition, in the following order: \( \vx \rightarrow \mQ^{-1}\vx \rightarrow \mLambda\mQ^{-1}\vx \rightarrow \mQ \mLambda \mQ^{-1} \vx \).

Note how the first transformation, \( \mQ^{-1}\vx \), mostly rotates the space into an orientation that is amenable to the appropriate stretching or shrinking by \( \mLambda \) in the second step. In the final step, the transformation by \( \mQ \), being the inverse of the first step, undoes that rotation.

Try to modify the matrix such that one of the elements of the diagonal matrix \( \mLambda \) becomes almost zero. Notice how the space gets squashed along that direction. Also, observe what happens when you end up with a zero eigenvalue in the \( \mLambda \) matrix.

Drag the circle to change the matrix row vectors
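If you would rather trace the same pipeline in code, this small NumPy sketch (using the same illustrative matrix as above, assumed purely for demonstration) applies the three stages to a single vector and confirms the result matches \( \mA \vx \).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues, Q = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)

x = np.array([1.0, 2.0])            # any input vector
step1 = np.linalg.inv(Q) @ x        # change into the eigenvector basis
step2 = Lambda @ step1              # stretch/shrink along each eigen-direction
step3 = Q @ step2                   # change back to the original basis

print(np.allclose(step3, A @ x))    # True: identical to applying A directly
```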

Criteria for eigendecomposition

For real symmetric matrices, an eigendecomposition is guaranteed to exist. The Spectral Theorem states that the eigendecomposition of a real symmetric matrix yields orthonormal eigenvectors, so that \( \mA = \mQ \mLambda \mQ^T \).

When an eigendecomposition exists, it may not be unique. If there are repeated eigenvalues, then any orthogonal set of vectors lying in the span of the corresponding eigenvectors also consists of eigenvectors, so an alternative \( \mQ \) can be chosen.
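A short NumPy check of the Spectral Theorem, again with a small symmetric matrix chosen only for illustration: the eigenvector matrix returned for a symmetric input is orthogonal, so its transpose acts as its inverse.

```python
import numpy as np

# Small symmetric matrix, chosen only for illustration.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, Q = np.linalg.eigh(S)
Lambda = np.diag(eigenvalues)

print(np.allclose(Q.T @ Q, np.eye(2)))     # True: Q is orthogonal, so Q^{-1} = Q^T
print(np.allclose(Q @ Lambda @ Q.T, S))    # True: S = Q Lambda Q^T
```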

Using eigendecomposition for calculating matrix inverse

Eigendecomposition is one of the approaches to finding the inverse of a matrix that we alluded to earlier. If a matrix can be eigendecomposed, then finding its inverse is quite easy. Using the properties of inverses listed before,

$$ \inv{\mA} = \inv{(\mQ \mLambda \inv{\mQ})} = \mQ \inv{\mLambda} \inv{\mQ} $$

We know from the previous discussion that \( \mQ \) is always invertible, and when it is orthogonal, as in the symmetric case, its inverse is simply its transpose \( \mQ^T \), which is cheap to compute.

Since \( \mLambda \) is a diagonal matrix, its inverse, \( \inv{\mLambda} \), can also be easily computed. It is merely another diagonal matrix whose entries are the reciprocals of the values along the diagonal of the original matrix.

$$ \mLambda^{-1} = \begin{bmatrix} \frac{1}{\lambda_1} & \ldots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \ldots & \frac{1}{\lambda_n} \end{bmatrix} $$

Now here's the interesting part: if any of the eigenvalues is zero, then the diagonal matrix \( \mLambda \) is not invertible (its determinant is zero!), and as a consequence, \( \mA \) is not invertible either.
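Putting this together, here is a minimal NumPy sketch (with an illustrative matrix whose eigenvalues are non-zero, so the inverse exists) that inverts a matrix through its eigendecomposition and compares the result against a direct inverse.

```python
import numpy as np

# Illustrative matrix with non-zero eigenvalues (so the inverse exists).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, Q = np.linalg.eig(A)

# Lambda^{-1} is diagonal, with the reciprocals of the eigenvalues on the diagonal.
# This step breaks down if any eigenvalue is zero, i.e. when A is singular.
Lambda_inv = np.diag(1.0 / eigenvalues)

A_inv = Q @ Lambda_inv @ np.linalg.inv(Q)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```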

Where to next?

You may now choose to explore other advanced topics in linear algebra.

Already feeling like an expert in linear algebra? Move on to other advanced topics in mathematics or machine learning.
