Eigendecomposition
Suppose a matrix \( \mA \in \real^{n \times n} \) is a square matrix with \( n \) linearly independent eigenvectors \( \vx_1, \ldots, \vx_n \) and corresponding eigenvalues \( \lambda_1,\ldots,\lambda_n \).
Let's define a new matrix \( \mQ \) as a collection of these eigenvectors, such that each column of \( \mQ \) is an eigenvector of \( \mA \)
$$ \mQ = \begin{bmatrix} \vx_1 ~\ldots~ \vx_n \end{bmatrix} $$
Now,
$$ \mA \mQ = \begin{bmatrix} \mA \vx_1 ~\ldots~ \mA \vx_n \end{bmatrix} $$
And because each \( \vx_i \) is an eigenvector, \( \mA \vx_i = \lambda_i \vx_i \). Substituting,
$$ \mA \mQ = \begin{bmatrix} \lambda_1 \vx_1 ~ \ldots ~ \lambda_n \vx_n \end{bmatrix} $$
This means \( \mA \mQ = \mQ \mLambda \), where we let \( \mLambda \) denote the diagonal matrix of eigenvalues.
$$
\mLambda = \begin{bmatrix}
\lambda_1 & \ldots & 0 \\
\vdots & \ddots & \vdots \\
0 & \ldots & \lambda_n
\end{bmatrix}
$$
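The identity \( \mA \mQ = \mQ \mLambda \) can be checked numerically. Below is a minimal NumPy sketch using an illustrative symmetric \( 2 \times 2 \) matrix (symmetric matrices are guaranteed to have \( n \) linearly independent eigenvectors); the matrix values are my own example, not from the text.

```python
import numpy as np

# An illustrative symmetric matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix Q whose
# i-th column is the eigenvector for the i-th eigenvalue.
eigenvalues, Q = np.linalg.eig(A)

# Lambda: the diagonal matrix of eigenvalues.
Lambda = np.diag(eigenvalues)

# Verify A Q = Q Lambda, i.e. A x_i = lambda_i x_i column by column.
assert np.allclose(A @ Q, Q @ Lambda)
```

Each column of `A @ Q` is \( \mA \vx_i \), and each column of `Q @ Lambda` is \( \lambda_i \vx_i \), so the assertion checks exactly the substitution made above.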
Since the matrix \( \mQ \) is invertible (do you know why? Its columns are linearly independent), we can multiply both sides of the equation on the right by \( \mQ^{-1} \).
$$ \mA = \mQ \mLambda \mQ^{-1} $$
Here, \( \mLambda \) is a diagonal matrix whose entries along the main diagonal are the eigenvalues of \( \mA \). The matrix \( \mQ \in \real^{n \times n} \) is a square matrix such that the \( i \)-th column of \( \mQ \) is the eigenvector of \( \mA \) corresponding to the \( i \)-th diagonal entry of \( \mLambda \).
Any square matrix with \( n \) linearly independent eigenvectors can be factorized this way. (Full rank alone is not enough: a full-rank matrix may lack \( n \) independent eigenvectors, and a matrix with a zero eigenvalue can still be factorized.)
This factorization of the matrix into its eigenvalues and eigenvectors is known as the eigendecomposition of the matrix.