Generative adversarial networks (GANs)

Deep Learning

Introduction

Generative adversarial networks (GANs) are among the most impactful deep learning strategies of the 2010s. Owing to their generic nature and strong performance, they have been applied in numerous domains for a variety of tasks. Originally proposed as a strategy for training generative models, the research has matured to the point where GANs can generate realistic fake data.

Prerequisites

To understand GANs, we recommend familiarity with the concepts in

  • Probability: A sound understanding of conditional and marginal probabilities and Bayes' theorem is desirable.
  • Introduction to machine learning: An introduction to basic concepts in machine learning such as classification, training instances, features, and feature types.

Follow the above links to first get acquainted with the corresponding concepts.

Intuition

Generative adversarial networks (GANs) are generative models that use a game-theoretic strategy to train better-performing models. This is achieved by having two competing models within the network: a generator and a discriminator.

The generator attempts to create fake examples that masquerade as samples from the training data distribution. The discriminator attempts to tell the fake examples apart from the real ones by outputting the probability that a given example is real.

Over time, the generator gets better at fooling the discriminator, and the discriminator gets better at catching the generator's fake examples. At convergence, the generator is expected to be strong enough to create realistic examples that are indistinguishable from real data. At this stage, the discriminator can do no better than guessing and outputs \( \frac{1}{2} \) everywhere.

Zero-sum game

Consider a pizza-sharing scenario. If one person takes more than their share of pizza, less pizza remains for all other consumers of that pizza. In other words, the additional pizza gained by one person is exactly the pizza lost by the others.

A game in which each participant's gain or loss is exactly balanced by the losses or gains of the other players is known as a zero-sum game. In other words, if \( g_i \) denotes the gain of the \( i \)-th participant and \( l_j \) denotes the loss of the \( j \)-th participant, then the sum of all losses subtracted from the sum of all gains is zero.

$$ \sum_{i} g_i - \sum_{j} l_j = 0 $$

Zero-sum games are a widely studied concept in game theory and economics. The loss function of generative adversarial networks is based on a zero-sum game.

The generator

The generator can be any model capable of generating examples from the distribution of the training data. In practice, generators are usually deep neural networks with architectures suitable for the application domain. The learning procedure trains the generator to transform a random input into an example that can pass as a sample from the training data distribution.

Mathematically, a generator is a function \( g: \real^{\dash{\ndim}} \to \real^\ndim \) that converts a random input \( \vz \in \real^{\dash{\ndim}} \) into an example \( \vx \in \real^\ndim \).

$$ \vx = g(\vz; \mTheta^{(g)}) $$

where \( \mTheta^{(g)} \) denotes the parameters of the generator model.
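As a concrete illustration, here is a minimal sketch of such a generator, assuming PyTorch and a small fully connected architecture; the noise dimension, hidden width, and output dimension are arbitrary illustrative choices, not part of the formulation above.

import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random input z to an example x = g(z; theta_g)."""
    def __init__(self, noise_dim=64, data_dim=784, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, data_dim),
            nn.Tanh(),  # assumes the data is scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Draw random inputs z and produce fake examples x.
z = torch.randn(16, 64)   # 16 noise vectors with dimension 64
x_fake = Generator()(z)   # shape: (16, 784)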

The discriminator

The discriminator can be any classification model that outputs the probability of an example being real as opposed to fake. It is typically a deep neural network with a sigmoid activation on the output.

Mathematically, a discriminator is a function \( d: \real^\ndim \to [0,1] \) that predicts the probability that the input example \( \vx \in \real^\ndim \) is real.

$$ \hat{y} = d(\vx; \mTheta^{(d)}) $$

where \( \mTheta^{(d)} \) denotes the parameters of the discriminator model.
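A matching discriminator sketch under the same assumptions; the sigmoid on the final layer produces the probability \( \hat{y} \) that the input example is real.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Maps an example x to d(x; theta_d), the probability in [0, 1] that x is real."""
    def __init__(self, data_dim=784, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim, hidden_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),  # probability of "real"
        )

    def forward(self, x):
        return self.net(x)

y_hat = Discriminator()(torch.randn(16, 784))  # shape: (16, 1), values in (0, 1)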

The GAN formulation

A simple way to formulate a GAN is as a zero-sum game. Any loss for the discriminator is a gain for the generator and vice versa. Therefore, the payoffs in GANs are exactly balanced to model a zero-sum game.

The discriminator receives the payoff \( v\left(\mTheta^{(g)}, \mTheta^{(d)}\right) \), defined as

\begin{equation} v\left(\mTheta^{(g)}, \mTheta^{(d)}\right) = \expect{\vx \sim p_{\text{data}}}{ \log d(\vx) } + \expect{\vx \sim p_{\text{model}}}{\log \left(1 - d(\vx)\right)} \label{eqn:gan-payoff} \end{equation}

The first term, \( \expect{\vx \sim p_{\text{data}}}{ \log d(\vx) } \), is an expectation over the available real data, denoted \( \expect{\vx \sim p_{\text{data}}}{\cdot} \). We want \( d(\vx) \) to be close to 1 for all examples drawn from the training distribution \( \vx \sim p_{\text{data}} \), implying an accurate detection of real examples. This leads to a high value for the payoff.

The second term, \( \expect{\vx \sim p_{\text{model}}}{\log \left(1 - d(\vx)\right)} \), is an expectation over the fake data created by the generator, denoted \( \expect{\vx \sim p_{\text{model}}}{\cdot} \). For fake examples, we want the discriminator to emit a probability close to 0, implying an accurate detection of fake examples. That is, we want \( d(\vx) \) to be close to 0 for all examples drawn from the generator distribution \( \vx \sim p_{\text{model}} \). Since the logarithm is taken of \( \left(1 - d(\vx)\right) \), this also leads to a high value for the payoff.
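On a minibatch, this payoff can be estimated by averaging the two log terms over real examples and generated examples. A minimal sketch, reusing the illustrative Generator and Discriminator above:

import torch

def discriminator_payoff(d, g, x_real, noise_dim=64, eps=1e-7):
    """Monte Carlo estimate of v(theta_g, theta_d) on one minibatch of real data."""
    z = torch.randn(x_real.size(0), noise_dim)
    x_fake = g(z)                             # samples from p_model
    d_real = d(x_real).clamp(eps, 1 - eps)    # d(x) for x ~ p_data
    d_fake = d(x_fake).clamp(eps, 1 - eps)    # d(x) for x ~ p_model
    return torch.log(d_real).mean() + torch.log(1 - d_fake).mean()

The clamping is only a numerical safeguard to keep the logarithms finite.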

To counterbalance the discriminator, the generator payoff is set to the negative of the discriminator payoff, \( -v\left(\mTheta^{(g)}, \mTheta^{(d)}\right) \).

With the payoffs set up this way, the training algorithm maximizes the payoff with respect to the discriminator and minimizes it with respect to the generator. Therefore, at convergence, the final generator \( \star{g} \) is

\begin{equation} \star{g} = \argmin_{g} \left[ \maxunder{d} v(g,d) \right] \label{eqn:gan-convergence} \end{equation}

Training
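The minimax problem above is typically solved by alternating stochastic gradient steps: one step that increases the payoff with respect to the discriminator parameters, followed by one step that decreases it with respect to the generator parameters. Below is a minimal sketch of this alternating procedure, reusing the illustrative pieces from the previous sections; the optimizer, learning rate, batch size, and the placeholder data are arbitrary choices for illustration.

import torch

g, d = Generator(), Discriminator()
opt_g = torch.optim.Adam(g.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(d.parameters(), lr=2e-4)

# Placeholder batches standing in for real training data.
data_loader = [torch.randn(32, 784) for _ in range(100)]
num_epochs = 5

for epoch in range(num_epochs):
    for x_real in data_loader:
        # Discriminator step: maximize v, i.e., minimize -v.
        opt_d.zero_grad()
        loss_d = -discriminator_payoff(d, g, x_real)
        loss_d.backward()
        opt_d.step()

        # Generator step: the generator payoff is -v, so minimize v.
        opt_g.zero_grad()
        loss_g = discriminator_payoff(d, g, x_real)
        loss_g.backward()
        opt_g.step()

In practice, the generator step often maximizes \( \log d(g(\vz)) \) instead of minimizing \( \log\left(1 - d(g(\vz))\right) \), a non-saturating variant that provides stronger gradients early in training when the discriminator easily rejects the generator's samples.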
