Linear Algebra was developed to solve systems of linear equations. So it is crucial we understand some basics about solving linear equations here.
To understand systems of linear equations, we recommend familiarity with the concepts covered in the articles linked above. Follow those links first to get acquainted with the corresponding concepts.
Suppose you wish to solve an equation in 2 unknown variables.
$$ 2x + 4y = 22 $$
Can you solve this to find the values of \( x \) and \( y \)? No, not unless you have more information.
Suppose the additional information is provided in the form of another equation
$$ x + y = 6 $$
Great! Now you can solve it. How did you do it?
Mechanically, you went through steps like these: multiply the second equation by 2 and subtract it from the first, \( (2x + 4y) - 2(x + y) = 22 - 2 \times 6 \), which leaves \( 2y = 10 \), so \( y = 5 \). Substituting back into \( x + y = 6 \) then gives \( x = 1 \).
Note how you first eliminated \( x \) to arrive at an equation in the single variable \( y \). This mechanical process is known as Gaussian elimination.
Gaussian elimination is a general-purpose strategy for solving equations in any number of unknown variables.
Check out the next interactive demo to see Gaussian elimination in action. Vary the number of equations and the number of unknowns to see the steps needed to arrive at a solution. Try to understand when the system is unable to solve the equations. Building intuition here is key to understanding much of linear algebra.
(Note: We are deliberately not doing any exception handling or dealing with infinity and NaNs. Check out when you get them to strengthen your intuition.)
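To make the procedure concrete, here is a minimal sketch of naive Gaussian elimination, assuming NumPy is available. It is an illustration, not the demo's actual implementation, and like the demo it deliberately skips pivoting and exception handling, so unsolvable systems surface as infinities or NaNs.

```python
# A minimal sketch of naive Gaussian elimination (assumes NumPy).
# No pivoting, no exception handling: a zero pivot produces inf/NaN,
# just as noted above.
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by forward elimination followed by back substitution."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out entries below the diagonal, column by column.
    for i in range(n):
        for j in range(i + 1, n):
            factor = A[j, i] / A[i, i]      # inf/NaN if the pivot is 0
            A[j, i:] -= factor * A[i, i:]
            b[j] -= factor * b[i]
    # Back substitution: solve for the unknowns from the last row upwards.
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# The system from above: 2x + 4y = 22 and x + y = 6.
print(gaussian_elimination(np.array([[2, 4], [1, 1]]), np.array([22, 6])))  # [1. 5.]
```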
If you have interacted with the demo above, you already know the answer. And also some pitfalls.
No, not every set of equations can be solved. Here are the obvious outliers: systems with fewer equations than unknowns, and systems with more equations than unknowns.
In the first case, we have inadequate information to solve the problem. Many solutions will satisfy the given equations, but we will not be able to narrow them down to a single precise one.
In the second case, some equations might be inconsistent with the others. That means the solution to one subset of the equations will differ from the solution to another subset. Hence, again, we cannot arrive at a single precise solution to the problem.
But what if the number of equations is the same as the number of unknowns? Let's take an example. Here are some equations in 2 variables.
$$ \begin{aligned} & 2x + 3y = 5 \\ & 4x + 6y = 10 \end{aligned} $$
If we were to do Gaussian elimination here, then to eliminate \( x \) from the second equation, we would subtract 2 times the first equation from the second to get \( 0x + 0y = 0 \)!
Not good! In our attempt to eliminate \( x \), we got rid of \( y \) too. We cannot solve for \( y \) anymore.
This happened because the second equation was a mere scaled up version of the first equation. In other words, the second equation, although seemingly different, conveyed the same information as the first one. It was not an independent piece of information.
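You can see the same failure numerically. As a small check, assuming NumPy, a library solver refuses this system outright because the coefficient matrix is singular.

```python
# The coefficient matrix of 2x + 3y = 5 and 4x + 6y = 10: the second row is
# just twice the first, so the matrix is singular and no unique solution exists.
import numpy as np

A = np.array([[2.0, 3.0],
              [4.0, 6.0]])
b = np.array([5.0, 10.0])

try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError as err:
    print(err)   # "Singular matrix"
```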
What if we had more than 2 equations, say 3 equations in 3 unknown variables? This is where things get interesting. Check out the next Gaussian elimination demo, where we have purposely constrained the equations to have a certain nature: in every example, the last equation is a sum of scalar multiples of the previous equations.
Here's what you will find. To be able to solve equations in \( n \) unknown variables, you need exactly \( n \) independent and consistent equations.
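One way to mimic the constrained demo in code (assuming NumPy, with a made-up example system): build three equations whose last one is a combination of the first two, and confirm that only two of them carry independent information.

```python
# Three equations in three unknowns, where the third equation is
# 2*(first) + 1*(second). Only two equations are independent, so the
# system cannot be solved uniquely.
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 3.0],
              [2.0, 5.0, 5.0]])   # row 3 = 2*row 1 + 1*row 2

print(np.linalg.matrix_rank(A))   # 2, not 3: one equation is redundant
```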
Enter your own linear equations. Compare your solution to ours.
Enter each equation on a separate line in the input box. Each equation should use the common format, such as \( 2x + 3y = 4 \). You may use any character of the English alphabet, \( a \) through \( z \), as your variables. The constants may be integers or real-valued numbers.
An example set of equations has been provided.
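We are not showing the demo's actual implementation here, but the following hypothetical sketch (assuming SymPy is installed) illustrates how equations typed in that format could be parsed and solved symbolically.

```python
# A hypothetical sketch of parsing equations like "2x + 3y = 4" and solving
# them symbolically (assumes SymPy). Not the demo's actual implementation.
from sympy import Eq, solve
from sympy.parsing.sympy_parser import (
    parse_expr, standard_transformations, implicit_multiplication_application,
)

TRANSFORMS = standard_transformations + (implicit_multiplication_application,)

def parse_equation(line):
    """Turn a string like '2x + 4y = 22' into a SymPy equation."""
    lhs, rhs = line.split("=")
    return Eq(parse_expr(lhs, transformations=TRANSFORMS),
              parse_expr(rhs, transformations=TRANSFORMS))

equations = [parse_equation(s) for s in ["2x + 4y = 22", "x + y = 6"]]
print(solve(equations))   # {x: 1, y: 5}
```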
Now that we understand how to solve linear equations, how about we come up with a compact representation for equations?
An astute reader will note that a linear equation such as \( 2x_1 + 4x_2 = 22 \) can be written in the form of vector dot products that we presented earlier.
$$ \va \cdot \vx = y $$
Here, \( \vx \in \real^2 \) is a vector containing the variables, \( \vx = [x_1,x_2] \). Also, \( \va \in \real^2 \) is another vector containing the constants, \( \va = [2,4] \). And, \( y \) is the result.
This is a succinct representation, and we will use it henceforth to represent equations.
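As a quick numerical check (assuming NumPy), the equation \( 2x_1 + 4x_2 = 22 \) really is just a dot product.

```python
# The equation 2*x1 + 4*x2 = 22 written as a dot product a . x = y.
import numpy as np

a = np.array([2, 4])   # the constants
x = np.array([1, 5])   # the solution found earlier: x1 = 1, x2 = 5
y = a @ x              # dot product of a and x

print(y)               # 22
```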
What about independence of equations in terms of vectors? Continuing our discussion of independence, if one equation is a scaled version of another, then they are not independent. In vector notation, if \( \va_j = \alpha \va_i \) for some scalar \( \alpha \) and \( i \ne j \), then the vectors are not independent. Conversely, if \( \va_j \ne \alpha \va_i \) for every scalar \( \alpha \) and \( i \ne j \), then the vectors \( \va_i \) and \( \va_j \) are said to be linearly independent.
What if there are more than 2 equations and more than 2 variables? Let's suppose we have \( m \) equations and \( n \) variables. Easy, we create \( m \) vectors, each with \( n \) values. Each of the \( m \) equations produces one output, giving \( m \) outputs in total.
Collectively, this can all be written as $$ \va_i \cdot \vx = y_i, \quad \forall i \in \{1,\ldots,m\} $$
What about independence for more than 2 equations? Just like we need 2 linearly independent vectors to solve for 2 unknowns, so do we need \( n \) linearly independent vectors to solve for \( n \) unknowns. Let's extend our formula of independence to more than 2 vectors.
So, if any of the \( m \) vectors can be written as a linear combination of the remaining vectors, \( \va_j = \sum_{i \ne j} \alpha_i \va_i \) for some scalars \( \alpha_i \), then the set of vectors is not linearly independent.
In plain terms, a group of \( m \) vectors is said to be linearly independent if none of them can be written as a linear combination of the rest.
And, to solve for \( n \) unknowns, you need \( n \) linearly independent equations or vectors.
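To make the definition concrete, here is a small numerical check (assuming NumPy, with made-up example vectors): we test whether one vector lies in the span of the others by solving a least-squares problem for the coefficients \( \alpha_i \).

```python
# Check whether a3 can be written as a linear combination of a1 and a2.
# If it can, the three vectors are not linearly independent.
import numpy as np

a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = np.array([2.0, 3.0, 5.0])          # deliberately 2*a1 + 3*a2

B = np.column_stack([a1, a2])            # columns are the "other" vectors
alphas, *_ = np.linalg.lstsq(B, a3, rcond=None)

print(alphas)                            # [2. 3.]
print(np.allclose(B @ alphas, a3))       # True: a3 is a combination, not independent
```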
Now that we have a concise representation for a single equation as a vector, how about we come up with a composite representation over multiple vectors? Let's stack each vector in the set \( \{\va_1, \ldots, \va_m \}, \va_i \in \real^n, \forall i \) as follows and refer to the entire box by the symbol \( \mA \).
$$ \mA = \begin{bmatrix} \va_{1,1} & \ldots & \va_{1,n} \\ \vdots & \ddots & \vdots \\ \va_{m,1} & \ldots & \va_{m,n} \\ \end{bmatrix} $$
Here, \( \va_{i,j} \) represents the \( j \)-th element of the \( i \)-th vector.
This composite representation of multiple vectors is known as a matrix.
Each horizontal list of elements is known as a row of the matrix. For example, the elements \( [\va_{1,1}, \va_{1,2}, \ldots, \va_{1,n}] \) form the first row of the matrix \( \mA \).
Similarly, each vertical list of elements is known as a column of the matrix. For example, the elements \( [\va_{1,1}, \va_{2,1}, \ldots, \va_{m,1}] \) form the first column of the matrix \( \mA \).
Naturally, the list of elements from the top left to the bottom right is known as the diagonal of the matrix. For example, when \( m = n \), the elements \( [\va_{1,1}, \va_{2,2}, \ldots, \va_{n,n}] \) form the diagonal of the matrix \( \mA \).
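In code (assuming NumPy, with an arbitrary example matrix), rows, columns, and the diagonal fall straight out of indexing.

```python
# Rows, columns, and the diagonal of a matrix, using NumPy indexing.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

print(A[0, :])        # first row:      [1 2 3]
print(A[:, 0])        # first column:   [1 4 7]
print(np.diag(A))     # diagonal:       [1 5 9]
```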
So, now that the vectors of multipliers have been stacked, how about the vector of unknowns and outputs?
Easy, we define a new operation on the matrix, the so-called matrix multiplication, to make it as succinct as
$$ \mA \vx = \vy $$
How cool is that? Matrix multiplication works by taking a row from the matrix \( \mA \), say \( \va_i \), and performing a dot product of \( \va_i \) with the column vector \( \vx \) of unknowns to arrive at the answer \( y_i \). Doing so over all the rows of the matrix leads to \( m \) outputs, thereby creating our output vector \( \vy \).
It should be noted that the number of results in \( \vy \) is the same as the number of rows of \( \mA \).
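Here is the same statement in code (assuming NumPy): multiplying \( \mA \) by \( \vx \) is just row-by-row dot products, and the result has one entry per row of \( \mA \).

```python
# Matrix-vector multiplication as row-wise dot products.
import numpy as np

A = np.array([[2.0, 4.0],
              [1.0, 1.0]])     # one row per equation
x = np.array([1.0, 5.0])       # the unknowns (here, the known solution)

y = A @ x                      # built-in matrix multiplication
y_by_rows = np.array([A[i] @ x for i in range(A.shape[0])])  # explicit dot products

print(y)           # [22.  6.]
print(y_by_rows)   # [22.  6.] -- same thing, one output per row of A
```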
Gaussian elimination can easily be re-implemented for matrices as simple row operations: multiply a row by an appropriate scalar and subtract it from another row to eliminate a variable, then continue to the solution as we demonstrated earlier.
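As a tiny illustration (assuming NumPy), here is the single row operation from our first example carried out on the augmented matrix \( [\mA \mid \vy] \).

```python
# One Gaussian-elimination row operation on the augmented matrix [A | y]
# for the system 2x + 4y = 22 and x + y = 6.
import numpy as np

M = np.array([[2.0, 4.0, 22.0],
              [1.0, 1.0,  6.0]])   # each row holds [coefficients | right-hand side]

M[0] = M[0] - 2 * M[1]             # eliminate x from the first row
print(M)
# [[ 0.  2. 10.]   -> 2y = 10, so y = 5
#  [ 1.  1.  6.]]  -> then x + 5 = 6, so x = 1
```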
What about the independence requirements we figured out earlier? Well, they still apply. More succinctly so. To solve equations in \( n \) variables you need \( n \) linearly independent rows and \( n \) linearly independent columns in the matrix \( \mA \).
Now that we have introduced the matrix, there is no looking back. Explore matrices (plural of matrix), their properties, their types, and the operations defined on them in our comprehensive articles on matrices.
If you already understand matrices, explore other advanced topics in linear algebra.
Already feeling like an expert in linear algebra? Move on to other advanced topics in mathematics or machine learning.
Help us create more engaging and effective content and keep it free of paywalls and advertisements!
Please share your comments, questions, encouragement, and feedback.