# Matrix geometry


One way to look at our system of linear equations $\mA \vx = \vy$ is to think that the matrix $\mA$ is acting as a transformation on the vector $\vx$ and resulting in another vector $\vy$.

This is akin to vector scaling and rotation that we presented earlier.

Let's build some intuition about a matrix as a transformation on a vector. Then, let us understand these results from a geometric perspective to appreciate the rotational, scaling, and flipping properties of a matrix.

This intuition will be crucial for understanding some of the complex matrix transformations and operations that we present later.

## Prerequisites

To understand the transformational nature of matrices, we recommend familiarity with the concepts linked above.

Follow those links to first get acquainted with the corresponding concepts.

## The identity matrix transform

Let us start with the identity matrix. We are interested in the product $\mI \vx = \vy$.

Expressing it in verbose form, say in 2 dimensions, helps us understand what really happens.

\begin{aligned} & \mI \vx = \vy \\ & \implies \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \\ & \implies \begin{bmatrix} 1x_1 + 0x_2 \\ 0x_1 + 1x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \\ & \implies \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} \\ & \implies x_1 = y_1 \text{ and } x_2 = y_2 \\ & \implies \vx = \vy \end{aligned}

This essentially means that multiplying by an identity matrix preserves the original vector.
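As a quick numerical sanity check of this derivation, here is a minimal NumPy sketch:

```python
import numpy as np

x = np.array([3.0, -2.0])
I = np.eye(2)   # the 2x2 identity matrix

y = I @ x       # compute y = I x

# multiplying by the identity matrix preserves the original vector
assert np.array_equal(y, x)
```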

##### Scaled identity matrix

What about multiplication by a scaled identity matrix, say $\alpha \mI$?

It is easy to extend the analysis above to note that $\vy = \alpha \vx$.

## Diagonal matrix transform on a vector

What about multiplying a general vector $\vx \in \real^n$ by a general diagonal matrix, say $\text{diag}(\alpha_1, \ldots, \alpha_n)$?

Again, it is easy to work out the math and note that each element of the resulting vector is scaled by the corresponding factor in the diagonal matrix, so that $\vy = [y_1, \ldots, y_n]$ with $y_i = \alpha_i x_i, \forall i \in \{1, \ldots, n \}$.
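A short NumPy sketch of this element-wise scaling, covering the scaled identity as a special case:

```python
import numpy as np

alpha = np.array([2.0, -1.0, 0.5])   # diagonal entries alpha_1, ..., alpha_n
D = np.diag(alpha)                   # diag(alpha_1, ..., alpha_n)
x = np.array([1.0, 4.0, -6.0])

y = D @ x                            # matrix-vector product

# each component is scaled independently: y_i = alpha_i * x_i
assert np.allclose(y, alpha * x)

# the scaled identity is the special case where all alpha_i are equal
assert np.allclose((3.0 * np.eye(3)) @ x, 3.0 * x)
```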

Check out the next interactive demo to understand this transformation effect of a diagonal matrix on an input vector. In this demo, we have constrained the matrix to be a diagonal matrix. Check out what happens when all diagonal elements of the matrix are the same or distinct. Check if you can scale, flip, and make the input vector span the entire space by just multiplying with a diagonal matrix.

Some key observations to check out:

• When both diagonal elements are exactly 1, then $\vy = \vx$.
• When the diagonal elements are equal, then $\vy = \alpha \vx$, where $\alpha$ is the common value of the diagonal elements.
• When one diagonal element is larger than the other, then $\vy$ is pulled towards the corresponding axis (in the positive quadrant).

So, the moral of the story is that multiplication with a diagonal matrix scales each coordinate of the input vector independently. This may seem like a trivial transform, but it is a handy tool for writing computationally efficient code. You would be surprised by the number of for-loops that can be avoided by a simple diagonal matrix multiplication.
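As an illustration of that point, the sketch below (with an arbitrary example size) computes the same diagonal scaling once with an explicit for-loop and once as a single vectorized multiply:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
alpha = rng.standard_normal(n)    # the diagonal of D
X = rng.standard_normal((n, n))   # a batch of column vectors

# explicit for-loop: scale row i of X by alpha_i
Y_loop = X.copy()
for i in range(n):
    Y_loop[i, :] *= alpha[i]

# the same computation as one broadcasted multiply, equivalent to
# np.diag(alpha) @ X without ever forming the n x n diagonal matrix
Y_vec = alpha[:, None] * X

assert np.allclose(Y_loop, Y_vec)
```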

## Matrix as a transformation on space

Instead of checking the impact of a matrix on a single vector, wouldn't it be cool to check out the overall impact of a matrix on an entire collection of vectors, indeed on the entire space?

This is what we do in the upcoming demos. We have represented the space with dots lined along concentric circles. Each dot represents the end point of a vector in that space.

The size of the dot is proportional to its distance from the center. Thus, the size of the dot is indicative of the magnitude of the vector it represents. (You know why!)

The color of the dot changes based on the angle the vector makes with the $X$-axis. So, all dots with the same color represent vectors that are at the same angle with the $X$-axis.

We start with demonstrating the effect of a diagonal matrix. You can modify a particular diagonal element of the matrix by dragging the corresponding axis vector. You will note the following:

• Changing the diagonal elements stretches or shrinks the space along the corresponding axis. There is no effect on the spacing of the dots along the other axis.
• The relative ordering of points remains the same, implying a linear transform.
• There is absolutely no rotation.
• Negating the values along a diagonal has the effect of flipping the axis and space.

## Orthogonal matrix transformation on space

Now that we know that diagonal matrices stretch and shrink the space, let's try to understand the impact of orthogonal matrices on the space.

Many of the important matrix factorizations and decompositions deal with orthogonal matrices. This demo will help you understand the implications of orthogonal transformations.

In the next demo, we will constrain the matrix to be an orthogonal matrix. That means the row vectors are constrained to be orthonormal: each is a unit vector, and the angle between them is 90 degrees.

To enforce this, you can drag only one vector in the demo. The other one adjusts itself to remain orthonormal to the first.

Check out how an orthogonal matrix only rotates (or reflects) the input space. It rotates the input space by the same amount as the matrix itself is rotated. Also note that there is absolutely no stretching or shrinking due to an orthogonal matrix.
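The norm- and angle-preserving behavior can be checked numerically. The sketch below builds a 2-D rotation matrix, one concrete example of an orthogonal matrix, and verifies these properties:

```python
import numpy as np

theta = np.deg2rad(30)
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation by 30 degrees

# rows (and columns) are orthonormal: Q Q^T = I
assert np.allclose(Q @ Q.T, np.eye(2))

x = np.array([2.0, 1.0])
y = Q @ x

# lengths are preserved -- no stretching or shrinking
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))

# the vector is rotated by exactly theta
angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
assert np.isclose(angle, theta)
```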

## Upper triangular matrix transform on space

Upper triangular matrices are zero below the main diagonal.

In the next demo, spin the row representing the $X$-axis (row 1 in the demo) a full 360 degrees. Observe how the rest of the space, including the $Y$-axis, rotates under multiplication by an upper triangular matrix. The $X$-axis itself, however, never rotates: the second row has a zero in its first position, so $x_1$ never contributes to $y_2$, and any vector on the $X$-axis maps back onto the $X$-axis. Of course, there may still be stretching, shrinking, or flipping along the $Y$-axis if you modified the second row.
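A minimal sketch of this X-axis invariance, with an arbitrary upper triangular matrix chosen for illustration:

```python
import numpy as np

U = np.array([[2.0, 3.0],
              [0.0, 1.5]])     # upper triangular: zero below the diagonal

x_axis = np.array([1.0, 0.0])  # a vector lying on the X-axis
y = U @ x_axis

# the image stays on the X-axis: its second component is still zero,
# because the second row (0, 1.5) ignores x_1 entirely
assert y[1] == 0.0
```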

## Lower triangular matrix transform on space

Lower triangular matrices are zero above the main diagonal.

I am sure you know what to expect in this case.

In the next demo, spin the row representing the $Y$-axis (row 2 in the demo) a full 360 degrees. Observe how the rest of the space, including the $X$-axis, rotates under multiplication by a lower triangular matrix.

As expected, the $Y$-axis itself never rotates, and the reason mirrors the previous case: the first row has a zero in its second position, so $x_2$ never contributes to $y_1$, and any vector on the $Y$-axis maps back onto the $Y$-axis. Of course, there may still be stretching, shrinking, or flipping along the $X$-axis if you modified the first row.
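The mirrored check for the lower triangular case, again with an arbitrary example matrix:

```python
import numpy as np

L = np.array([[1.5, 0.0],
              [3.0, 2.0]])     # lower triangular: zero above the diagonal

y_axis = np.array([0.0, 1.0])  # a vector lying on the Y-axis
y = L @ y_axis

# the image stays on the Y-axis: its first component is still zero,
# because the first row (1.5, 0) ignores x_2 entirely
assert y[0] == 0.0
```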

## Symmetric matrix transform on space

Symmetric matrices are a very important family of matrices that appear very often in machine learning literature.

We know what influence diagonal and orthogonal matrices have on an input space. A symmetric matrix could sometimes be diagonal or orthogonal, so we already know what to expect in those situations.

In the next demo, we have constrained the first element of the second row vector to move in sync with the second element of the first row vector. This ensures the matrix stays symmetric at all times.

Keep a watch on the dark blue dots lining the $X$-axis as you move the first row of the matrix. These points rotate along with that vector.

Also notice the light green dots. These always align with the second row vector.

Also note that when the two row vectors align along the same line, the space is squashed into a single dimension. All points fall onto a line, and that line points in the same direction as the first row.
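This squashing onto a line can be verified numerically. In the sketch below, the example matrix is arbitrary, constructed so that its second row is a multiple of the first:

```python
import numpy as np

# a symmetric matrix whose second row is a multiple of the first:
# the rows are aligned, so the matrix has rank 1
A = np.array([[2.0, 4.0],
              [4.0, 8.0]])
assert np.linalg.matrix_rank(A) == 1

# apply A to a cloud of random points
rng = np.random.default_rng(1)
points = rng.standard_normal((100, 2))
images = points @ A.T           # each row is A @ point

# all images lie on the line through the first row (2, 4)
direction = A[0] / np.linalg.norm(A[0])
residual = images - np.outer(images @ direction, direction)
assert np.allclose(residual, 0.0)
```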

Every time the two row vectors align, the space flips over as the vectors cross each other.

Now hold the first row fixed and see what happens when you move the second row along its constrained path. You will notice that there is no more rotation. Instead, there is stretching, shrinking, and flipping across the line set up by the first row.

It is almost as if the first row vector sets the direction (rotation) of the space, and the second, dependent vector then acts like a diagonal matrix along that orientation.

Try out the demo to see if you can visualize these ideas.
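This rotate-then-scale intuition is made precise by the eigendecomposition of a symmetric matrix into an orthogonal matrix of eigenvectors and a diagonal matrix of eigenvalues. A minimal NumPy sketch, with an arbitrary symmetric example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # a symmetric matrix

lam, Q = np.linalg.eigh(A)     # eigenvalues and orthonormal eigenvectors

# Q is orthogonal (a pure rotation/reflection), diag(lam) a pure scaling
assert np.allclose(Q @ Q.T, np.eye(2))

# A decomposes as rotation, then diagonal scaling, then rotation back
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)
```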

## Positive definite matrix transform on space

A positive definite matrix is a symmetric matrix $\mA \in \real^{n \times n}$ such that $\vx^T \mA \vx > 0$ for every nonzero vector $\vx \in \real^n$. Some useful facts about positive definite matrices:

• The diagonal elements are always positive.
• The non-diagonal elements may be negative.
• If, in addition to a positive diagonal, each diagonal element is greater than the sum of the absolute values of the non-diagonal elements of its row (or column, due to symmetry), then the matrix is guaranteed to be positive definite. This diagonal dominance is sufficient but not necessary: a positive definite matrix need not be diagonally dominant.
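A numerical sketch checking positive definiteness via the eigenvalues (a symmetric matrix is positive definite exactly when all its eigenvalues are positive); both example matrices are arbitrary illustrations:

```python
import numpy as np

def is_positive_definite(A):
    """A symmetric matrix is positive definite iff all eigenvalues are positive."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

# strictly diagonally dominant with a positive diagonal -> positive definite
A = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
assert is_positive_definite(A)

# but positive definiteness does NOT require diagonal dominance
B = np.array([[2.0, 1.5, 1.5],
              [1.5, 2.0, 1.5],
              [1.5, 1.5, 2.0]])
assert is_positive_definite(B)    # all eigenvalues are positive
assert not (2.0 > 1.5 + 1.5)      # yet every row fails the dominance test
```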

How might such a matrix impact the underlying space?

Try out the next demo to find out.

Being a symmetric matrix, much of what you saw in the symmetric matrix transform will hold here.

With one big difference: a positive definite matrix never turns the space by 90 degrees or more. Pay attention to the dark green points (or pick your favorite color) as you move the rows. See how constrained they are in the transformed space. The angle between each vector $\vx$ in the original space and its image under the transform, $\mA \vx$, is always less than 90 degrees.

Thus, each point is constrained to the open half-space on its own side of the origin; the transform never flips a vector to point away from its original direction. This is the matrix analogue of multiplying by a positive number, and that is one intuition behind the name positive definite.
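A minimal sketch verifying the acute-angle property for an arbitrary positive definite example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])     # symmetric positive definite (eigenvalues > 0)

rng = np.random.default_rng(2)
for _ in range(1000):
    x = rng.standard_normal(2)
    y = A @ x
    # x^T A x > 0 means the angle between x and A x is below 90 degrees
    assert x @ y > 0
    cos_angle = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    assert cos_angle > 0
```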

Can you think about what will happen if the matrix were negative definite?

## Where to next?

With a sound understanding of matrix geometry, you are ready to explore other advanced topics in linear algebra.

Already feeling like an expert in linear algebra? Move on to other advanced topics in mathematics or machine learning.