Autoencoders

Deep Learning

Introduction

Autoencoders are neural networks trained to reproduce their input at the output through a hidden layer that acts as a learned code. They are specifically designed to infer the useful characteristics of data by learning to ignore the noise and focus on the relevant, generalizable patterns. This is typically achieved by limiting the code to fewer degrees of freedom than the original input. For example, this could be done by introducing a bottleneck hidden layer that is much smaller than the input and output layers. This constrains the code, the activations of the hidden layer, to capture only the differentiating patterns in the input, as opposed to explaining all its idiosyncrasies.

Intuitively, autoencoders are a nonlinear dimensionality reduction approach, unlike linear transforms such as principal component analysis.

Prerequisites

To understand autoencoders, we recommend familiarity with deep feedforward networks, gradient-based optimization with backpropagation, and principal component analysis.

First get acquainted with those concepts before proceeding.

Intuition

Dimensionality reduction requires representing data in fewer dimensions than the original. How do we discover the limited dimensions that can faithfully reproduce the data? Useful data have patterns in them. If we can discover these patterns and encode them in fewer dimensions than the input, then we can attempt to reverse engineer, or decode, this reduced-dimensional representation to faithfully recover the input. This is the primary goal of autoencoders — discover an intermediate coding language for a good reconstruction of the input.

$$ \text{input} \xrightarrow{\text{encode}} \text{code} \xrightarrow{\text{decode}} \text{input} $$

If we allow the code to have many degrees of freedom, it will conform to all the idiosyncrasies in the input and, on decoding, quite possibly reproduce the original input exactly. Such an autoencoder has a very high capacity for conforming to the input, in effect merely memorizing it. Not quite useful.

If we constrain the code to be much smaller than the input, we may not be able to recover the input exactly. But we can expect a good code to abstract out the differentiating and salient patterns in the data, while ignoring the common noise or unimportant variations among the input examples. An autoencoder whose code is constrained to be much smaller than the input is known as an undercomplete autoencoder. Much of the art in building an autoencoder involves choosing the appropriate level of undercompleteness to discover discerning and important patterns in the data.

A simple autoencoder

Achieving a simple undercomplete autoencoder is quite straightforward. We use a network architecture shaped like an hourglass: starting at a wide input layer, we create a stack of progressively narrower hidden layers to build the encoder portion of the network. The coding language is the activation, on an input, of the final layer of the encoder, which is the narrowest hidden layer of the overall network. For the decoder portion, the layers widen again up to the output layer, which is as wide as the input layer.

Consider an \( \ndim \)-dimensional input vector \( \vx \in \real^\ndim \). Let \( e: \real^\ndim \to \real^\nclass \) denote the encoder function and \( d: \real^\nclass \to \real^\ndim \) the decoder function, such that the code dimension \( \nclass \) is much smaller than \( \ndim \). That is, \( \nclass \ll \ndim \).

$$ \vx \xrightarrow{\text{encode}} \vh = e(\vx) \xrightarrow{\text{decode}} \overset{\sim}{\vx} = d(e(\vx)) $$

Let the loss of this deep feedforward network be denoted \( \loss(\vx, d, e) \), a function of the input \( \vx \) and its reconstruction \( d(e(\vx)) \). An example loss for training the autoencoder is the \( L_2 \)-norm of the reconstruction error

$$ \loss(\vx, d, e) = \norm{\vx - d(e(\vx))}{2} $$

With this loss, training an autoencoder follows the same strategy as any deep feedforward network — gradient-based optimization, with gradients computed using backpropagation.
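As a concrete illustration, here is a minimal sketch of an undercomplete autoencoder in PyTorch. The layer sizes, the ReLU activations, and the optimizer settings are illustrative assumptions rather than prescriptions.

```python
# A minimal sketch of an undercomplete autoencoder, assuming PyTorch.
# Dimensions (784 -> 32) and the hourglass shape are illustrative choices.
import torch
import torch.nn as nn

input_dim, code_dim = 784, 32   # assumed sizes for illustration

encoder = nn.Sequential(
    nn.Linear(input_dim, 128),
    nn.ReLU(),
    nn.Linear(128, code_dim),   # bottleneck: the code h = e(x)
)
decoder = nn.Sequential(
    nn.Linear(code_dim, 128),
    nn.ReLU(),
    nn.Linear(128, input_dim),  # reconstruction d(e(x))
)
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()          # mean squared reconstruction error

def train_step(x):
    """One gradient step on a batch x of shape (batch, input_dim)."""
    optimizer.zero_grad()
    x_hat = autoencoder(x)
    loss = loss_fn(x_hat, x)    # L(x, d(e(x)))
    loss.backward()             # gradients via backpropagation
    optimizer.step()
    return loss.item()

# Example usage with random data standing in for a real dataset.
x = torch.randn(64, input_dim)
print(train_step(x))
```

The bottleneck of 32 units enforces undercompleteness: the 784-dimensional input must be summarized by a much smaller code before it can be reconstructed.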

Sparse autoencoder

In some applications, we wish to introduce sparsity into the coding language, so that different input examples activate separate elements of the coding vector. Sparsity in the coding language can be achieved by regularizing the autoencoder with a sparsifying penalty on the code \( \vh \).

$$ \loss_{\text{sparse}}(\vx, d, e) = \loss(\vx, d, e) + \Omega(e(\vx)) $$

where \( \Omega(\vh) \) is some sparsifying penalty on the code \( \vh = e(\vx) \). For example, we can use the \( L_1 \)-norm of the code, \( \Omega(\vh) = \norm{\vh}{1} \).
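Continuing the PyTorch sketch from the previous section, the sparsifying penalty is simply added to the reconstruction term. The penalty weight below is an assumed hyperparameter, not a value prescribed here.

```python
# A minimal sketch of the sparse autoencoder loss, reusing `encoder` and
# `decoder` from the earlier sketch. `sparsity_weight` is an assumed
# hyperparameter controlling the strength of the penalty.
sparsity_weight = 1e-3

def sparse_loss(x):
    h = encoder(x)                              # code h = e(x)
    x_hat = decoder(h)                          # reconstruction d(e(x))
    reconstruction = ((x - x_hat) ** 2).mean()  # L(x, d, e)
    penalty = sparsity_weight * h.abs().sum(dim=1).mean()  # Omega(h): L1 norm of the code
    return reconstruction + penalty
```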

Denoising autoencoder

As we mentioned earlier, a desirable property of an autoencoder is undercompleteness. This is typically achieved by constraining the size of the code, thereby controlling the capacity of the network. An autoencoder with too much capacity may just memorize the input, in effect learning an identity function that maps the input to itself exactly. This is useless for most tasks.

We need ways to avoid learning the identity mapping. Undercompleteness is one way. Another is to make the autoencoder learn something useful by deliberately introducing noise into the input and requiring the autoencoder to recover the original, uncorrupted input. If \( \dash{\vx} \) denotes the input with some added noise, then the loss function of the denoising autoencoder is

$$ \loss_{\text{denoising}}(\vx, d, e) = \norm{\vx - d(e(\dash{\vx}))}{2} $$

With such a setup, the autoencoder is unlikely to learn the identity mapping because the input and output are actually different. With a denoising autoencoder, we can therefore hope that it learns something useful.
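Continuing the same sketch, a denoising training step corrupts the input with Gaussian noise (one of several possible corruptions) and measures the reconstruction against the clean input. The noise scale is an assumed hyperparameter.

```python
# A minimal sketch of denoising-autoencoder training, reusing `autoencoder`,
# `optimizer`, and `loss_fn` from the earlier sketch.
noise_std = 0.1  # illustrative corruption level

def denoising_step(x):
    x_noisy = x + noise_std * torch.randn_like(x)  # corrupted input x'
    optimizer.zero_grad()
    x_hat = autoencoder(x_noisy)                   # d(e(x'))
    loss = loss_fn(x_hat, x)                       # compare against the clean x
    loss.backward()
    optimizer.step()
    return loss.item()
```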

Relation of autoencoders to PCA

We have covered principal component analysis (PCA) extensively before. Intuitively, PCA is a linear dimensionality reduction approach that works by finding the principal components or basis that lead to the least reconstruction error on the dataset.

Autoencoders are a nonlinear approach, again trying to discover the code that minimizes the reconstruction error on the data. In fact, if the decoder is linear and the loss function is mean squared error, an undercomplete autoencoder learns to span the same subspace as PCA. In other words, under these circumstances, the autoencoder discovers the principal subspace of the training data, much like PCA.
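As a quick, hedged sanity check of this connection, the sketch below trains a purely linear autoencoder with mean squared error on centered data and compares the subspace spanned by the decoder weights with the top principal directions from an SVD. The data sizes, learning rate, and iteration count are illustrative assumptions.

```python
# Checking that a linear autoencoder trained with MSE spans the principal
# subspace. Sizes and training schedule are illustrative.
import torch

n, d, k = 500, 20, 3
X = torch.randn(n, d) @ torch.randn(d, d)   # random correlated data
X = X - X.mean(dim=0)                       # center the data, as PCA does

enc = torch.nn.Linear(d, k, bias=False)
dec = torch.nn.Linear(k, d, bias=False)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = ((X - dec(enc(X))) ** 2).mean()  # mean squared reconstruction error
    loss.backward()
    opt.step()

# Top-k principal directions from the SVD of the centered data.
U, S, Vt = torch.linalg.svd(X, full_matrices=False)
pca_basis = Vt[:k]                          # shape (k, d)

# Compare subspaces: orthonormalize the decoder columns and project them
# onto the principal directions. Overlap of 1.0 means identical subspaces.
W = dec.weight.detach()                     # shape (d, k)
Q, _ = torch.linalg.qr(W)
overlap = (torch.linalg.matrix_norm(pca_basis @ Q) ** 2 / k).item()
print(f"subspace overlap (1.0 = identical): {overlap:.3f}")
```

As training converges, the reported overlap should approach 1.0, reflecting that the linear autoencoder recovers the same principal subspace as PCA.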
