# Data types

## Linear Algebra

Linear algebra is all about data types, their properties, and operations that can be performed on them. Data types are containers for data. They are named according to their capacity.

- A scalar is a single value.
- A vector is a collection of scalars.
- A matrix is a collection of vectors.

We start our discussion with the most primitive of data types, the scalar.

## Scalar

A scalar is the most primitive data type. It is a container for a single value. It could be a constant or a univariate variable.

Although there is no accepted standard notation, it is common practice to denote scalar constants with Greek letters and scalar variables with English letters. For example, $\alpha, \beta, \gamma,$ and $\lambda$ are all scalar constants, while $a, b, c, \ldots, x, y, z$ are all scalar variables.

Scalar variables can be real or complex. For machine learning, limiting our discussion to real-valued scalars will suffice. So, henceforth, $a \in \real$.

Scalars support all basic arithmetic operations that you are already familiar with: addition, subtraction, multiplication, and division. Extended arithmetic operations such as roots, exponentiation, logarithms, and trigonometric functions are also supported.
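As a concrete sketch of these operations (using plain Python and its standard `math` module, which is our choice here, not something the article prescribes):

```python
import math

# A scalar is a single value; a Python float models a real-valued scalar.
a = 2.0
b = 3.0

# Basic arithmetic operations.
print(a + b)  # 5.0
print(a - b)  # -1.0
print(a * b)  # 6.0
print(a / b)  # roughly 0.667

# Extended operations: roots, exponentiation, logarithms, trigonometry.
print(math.sqrt(a))           # roughly 1.414
print(a ** b)                 # 8.0
print(math.log(a))            # natural logarithm of 2
print(math.sin(math.pi / 2))  # 1.0
```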

In machine learning, most models are multivariate. But the vital tunable variables, the so-called hyper-parameters, are usually scalars. The success, downfall, and reproducibility of machine learning algorithms hinge on these hyper-parameters. So, scalars may be simple, but they are nothing to be scoffed at.

## Vector

A vector is a collection of scalars. It is a 1-dimensional data type.

To indicate that a vector is a bigger container than a scalar, the vector notation uses bold-faced characters. For example

$$\va = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix}$$

Here $\va$ is a vector containing $n$ values. This is succinctly represented as $\va \in \real^n$.

A vertical vector, as represented in the equation above, is known as a column vector. The horizontal one is known as a row vector.

$$\vb = \begin{bmatrix} a_1 & \ldots & a_n \end{bmatrix}$$
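A quick sketch of vectors in code (we use NumPy here as an illustrative choice; the values are arbitrary). Note that NumPy's plain 1-D arrays carry no row/column orientation, so representing that distinction explicitly requires a 2-D array with a singleton dimension:

```python
import numpy as np

# A vector with n = 3 scalar entries.
a = np.array([1.0, 2.0, 3.0])
print(a.shape)  # (3,) -- a 1-D array, no row/column orientation

# To make the orientation explicit, use a 2-D array
# with a singleton dimension.
col = np.array([[1.0], [2.0], [3.0]])  # column vector, shape (3, 1)
row = np.array([[1.0, 2.0, 3.0]])      # row vector,    shape (1, 3)
print(col.shape)  # (3, 1)
print(row.shape)  # (1, 3)
```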

## Transposing a vector

A row vector can be transformed into a column vector and vice versa with the transpose operator, denoted by a superscript $T$ on the vector being transposed. For example

$$\vb = \va^T$$

$$\va = \vb^T$$

In most machine learning literature, it is common practice to present equations and analyses in column-vector form. In this series of articles, vectors will always be column vectors.
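The transpose operator above can be sketched in code (again with NumPy, an illustrative choice on our part); the `.T` attribute swaps the two axes of a 2-D array:

```python
import numpy as np

col = np.array([[1.0], [2.0], [3.0]])  # column vector a, shape (3, 1)
row = col.T                            # b = a^T, a row vector, shape (1, 3)

print(row.shape)    # (1, 3)
print(row.T.shape)  # (3, 1) -- transposing again recovers the column vector
print(np.array_equal(row.T, col))  # True
```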

## Matrix

Let's stack the vectors of the set $\set{\va_1, \ldots, \va_m}$, where $\va_i \in \real^n$ for all $i$, as rows, as shown below, and refer to the entire box by the symbol $\mA$, a matrix.

$$\mA = \begin{bmatrix} a_{1,1} & \ldots & a_{1,n} \\ \vdots & \ddots & \vdots \\ a_{m,1} & \ldots & a_{m,n} \end{bmatrix}$$

Here, $a_{i,j}$ represents the $j$-th element of the $i$-th vector. Since individual elements are scalars, they are written in regular (non-bold) type.

This composite representation of multiple vectors is known as a matrix. It is a 2-dimensional data type.

Each horizontal list of elements is known as a row of the matrix. For example, the elements $[a_{1,1}, a_{1,2}, \ldots, a_{1,n}]$ form the first row of the matrix $\mA$.

Similarly, each vertical list of elements is known as a column of the matrix. For example, the elements $[a_{1,1}, a_{2,1}, \ldots, a_{m,1}]$ form the first column of the matrix $\mA$.

Finally, the list of elements running from the top left toward the bottom right is known as the diagonal of the matrix. For example, the elements $[a_{1,1}, a_{2,2}, \ldots, a_{k,k}]$, where $k = \min(m, n)$, form the diagonal of the matrix $\mA$.
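Rows, columns, and the diagonal can be extracted by indexing, sketched here with NumPy (an illustrative choice; the values are arbitrary, and note that code indices start at 0 while the math notation starts at 1):

```python
import numpy as np

# A 3x3 matrix A built by stacking three row vectors.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(A.shape)     # (3, 3) -- m rows, n columns
print(A[0, :])     # first row:    [1. 2. 3.]
print(A[:, 0])     # first column: [1. 4. 7.]
print(np.diag(A))  # diagonal:     [1. 5. 9.]
print(A[0, 2])     # the element a_{1,3}, indexed as A[0, 2]: 3.0
```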

## Where to next?

Now that you understand the basic data types, delve deeper into more advanced topics in linear algebra to build intuition about the properties and operations on these data types. We recommend that you start with building a geometric intuition about vectors.