Maxima, minima, and saddle points


Much of machine learning is built around the idea of loss functions and optimizing them. To understand optimization, we first need to build intuition about maxima, minima, and so-called saddle points.

In this article, we will build that understanding through interactive visualizations of several example functions.

Prerequisites

To understand this article on identifying maxima, minima, and saddle points, we recommend familiarity with concepts covered in our earlier articles, in particular Taylor's theorem, the gradient, the Hessian, and the definiteness of matrices.

Follow the links to those articles to first get acquainted with the corresponding concepts.

Taylor's Theorem for multivariate functions

Taylor's theorem, introduced earlier, also applies to multivariate functions.

For a bivariate function \( f: \real^2 \to \real \), the Taylor series expansion up to second-order terms is

$$ f(x+a, y+b) \approx f(a,b) + x\dox{f}\bigg\rvert_{a,b} + y\doy{f}\bigg\rvert_{a,b} + \frac{1}{2}\left( x^2 \doxx{f}\bigg\rvert_{a,b} + xy\doxy{f}\bigg\rvert_{a,b} + xy \doyx{f}\bigg\rvert_{a,b} + y^2 \doyy{f}\bigg\rvert_{a,b} \right) $$

Here, the notation \( g\bigg\rvert_{a,b} \) means that the derivative \( g \) is evaluated at the point \( (a,b) \).

We will use this Taylor-series approximation to identify the conditions for maxima, minima, and saddle points.

Conditions for a minimum

Suppose the function has a minimum at some point \( (a,b) \).

Since a minimum is a critical point, this means the gradient of the function is zero at \( (a,b) \). Therefore, \(\dox{f}\bigg\rvert_{a,b} = 0 \) and \( \doy{f}\bigg\rvert_{a,b} = 0 \).

For \( f \) to have a minimum at \( (a,b) \), we need \( f(x+a,y+b) > f(a,b) \) for all sufficiently small nonzero \( (x,y) \). Since the first-order terms vanish at the critical point, the Taylor series expansion above implies

$$ x^2 \doxx{f}\bigg\rvert_{a,b} + xy\doxy{f}\bigg\rvert_{a,b} + xy \doyx{f}\bigg\rvert_{a,b} + y^2 \doyy{f}\bigg\rvert_{a,b} > 0 $$

If \( \vx = [x, y] \) denotes the vector of the two coordinates, we can write the above relationship compactly as

$$ \vx^T \mH\bigg\rvert_{a,b} \vx > 0 $$

where \( \mH\bigg\rvert_{a,b} \) is the Hessian matrix, the matrix of second-order partial derivatives, evaluated at \( (a,b) \).
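
Written out for the bivariate case, the Hessian collects the four second-order partial derivatives, and expanding \( \vx^T \mH \vx \) recovers exactly the quadratic expression above:

$$ \mH = \begin{bmatrix} \doxx{f} & \doxy{f} \\ \doyx{f} & \doyy{f} \end{bmatrix}, \qquad \vx^T \mH \vx = x^2 \doxx{f} + xy\doxy{f} + xy\doyx{f} + y^2 \doyy{f} $$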

But wait! This inequality, \( \vx^T \mH\bigg\rvert_{a,b} \vx > 0 \) for every nonzero \( \vx \), is precisely the statement that \( \mH\bigg\rvert_{a,b} \) is positive definite!

Thus, a function has a minimum at a point \( (a,b) \) if

  • Gradient at \( (a,b) \) is zero
  • Hessian at \( (a,b) \) is positive definite
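
As a quick example, consider \( f(x,y) = x^2 + y^2 \): the gradient \( [2x, 2y] \) is zero only at the origin, and the Hessian \( \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \) is positive definite, so the function has a minimum at the origin.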

Conditions for a maximum of a function

We can arrive at these conditions using the same approach as before.

Suppose the function has a maximum at some point \( (c,d) \).

Since a maximum is a critical point, this means the gradient of the function is zero at \( (c,d) \). Therefore, \(\dox{f}\bigg\rvert_{c,d} = 0 \) and \( \doy{f}\bigg\rvert_{c,d} = 0 \).

For a maximum to exist at \( (c,d) \), it should be the case that \( f(x+c,y+d) < f(c,d) \) for all sufficiently small nonzero \( (x,y) \).

Since the first-order terms again vanish at the critical point, the Taylor series expansion above implies

$$ x^2 \doxx{f}\bigg\rvert_{c,d} + xy\doxy{f}\bigg\rvert_{c,d} + xy \doyx{f}\bigg\rvert_{c,d} + y^2 \doyy{f}\bigg\rvert_{c,d} < 0 $$

Using the same vector notation and Hessian notation as before, we note that the above inequality is equivalent to

$$ \vx^T \mH\bigg\rvert_{c,d} \vx < 0 $$

This means \( \mH\bigg\rvert_{c,d} \) must be negative definite at the maximum.

Thus, a function has a maximum at a point \( (c,d) \) if

  • Gradient at \( (c,d) \) is zero
  • Hessian at \( (c,d) \) is negative definite
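
For example, \( f(x,y) = -(x^2 + y^2) \) has zero gradient only at the origin, and its Hessian \( \begin{bmatrix} -2 & 0 \\ 0 & -2 \end{bmatrix} \) is negative definite, so the origin is a maximum.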

Conditions for a saddle point

Well, what if the gradient of the function is zero at a point, but the Hessian is indefinite? The point is still a critical point, but it is neither a maximum nor a minimum: along some directions the function increases, and along others it decreases. Such a point is a saddle point.

To summarize, a function has a saddle point at a point \( (c,d) \) if

  • Gradient at \( (c,d) \) is zero
  • Hessian at \( (c,d) \) is indefinite
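
The classic example is \( f(x,y) = x^2 - y^2 \): the gradient \( [2x, -2y] \) is zero at the origin, but the Hessian \( \begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix} \) has one positive and one negative eigenvalue, so it is indefinite. The function curves up along the \( x \)-axis and down along the \( y \)-axis, the shape of a saddle.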

A summary of the conditions

As a quick reference, here we summarize the three cases.

So, if the gradient at a point \( (a,b) \) is zero, that is, \( \nabla f\bigg\rvert_{a,b} = 0 \), then the function has a

  • minimum at \( (a,b) \), if its Hessian \( \mH\bigg\rvert_{a,b} \) at that point is positive definite.
  • maximum at \( (a,b) \), if its Hessian \( \mH\bigg\rvert_{a,b} \) at that point is negative definite.
  • saddle point at \( (a,b) \), if its Hessian \( \mH\bigg\rvert_{a,b} \) at that point is indefinite.

This is very important.

To distinguish among maxima, minima, and saddle points, investigate the definiteness of the Hessian.
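
If you want to perform this check numerically, here is a minimal Python sketch (assuming NumPy is available; the function name classify_critical_point is our own) that classifies a critical point from the signs of the eigenvalues of its Hessian:

    import numpy as np

    def classify_critical_point(hessian, tol=1e-8):
        # Classify a critical point from its (symmetric) Hessian evaluated at that point.
        # Assumes the gradient there is already zero.
        eigenvalues = np.linalg.eigvalsh(hessian)
        if np.all(eigenvalues > tol):
            return "minimum"        # positive definite
        if np.all(eigenvalues < -tol):
            return "maximum"        # negative definite
        if np.any(eigenvalues > tol) and np.any(eigenvalues < -tol):
            return "saddle point"   # indefinite
        return "inconclusive"       # some eigenvalue is (near) zero; higher-order terms decide

    # Example: f(x, y) = x^2 + y^2 has Hessian [[2, 0], [0, 2]] everywhere.
    print(classify_critical_point(np.array([[2.0, 0.0], [0.0, 2.0]])))  # -> minimum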

Gradient and Hessian of Multivariate Bowl

In the accompanying demo, we have plotted the gradient of the function as follows. At each point on the 2-dimensional plane,

  • The arrows show the direction of the gradient at that point.
  • The color intensity shows the magnitude, the \(L_2\)-norm, of the gradient at that point.

Vector direction and norm were introduced earlier in our article on vector geometry in linear algebra.

We have also superimposed a red arrow indicating the gradient at the highlighted point. The arrow represents the vector \( [\dox{f} \text{ } \doy{f}] \), with its origin at the point \( (x,y) \). Notice that it always points in the direction in which the function increases, and it is longer where the function changes steeply.

For the symmetric multivariate bowl function, the gradient plot conveys the following information.

  • The function increases in all directions from the center.
  • The rate of growth of the function is symmetric around the center.
  • The function grows more rapidly as we move away from the center.
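
These observations can be reproduced with a short script. The sketch below is our own, using NumPy and Matplotlib with \( f(x,y) = x^2 + y^2 \) as a stand-in bowl (the demo's exact function may differ); it draws the gradient direction as arrows and the gradient norm as color, just like the demo panel:

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in bowl function; its gradient is [2x, 2y].
    def grad_f(x, y):
        return 2 * x, 2 * y

    xs, ys = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
    gx, gy = grad_f(xs, ys)
    norm = np.hypot(gx, gy)  # L2-norm of the gradient at each grid point

    # Arrows show the gradient direction; color shows its magnitude.
    plt.quiver(xs, ys, gx, gy, norm, cmap="viridis")
    plt.colorbar(label="gradient norm")
    plt.gca().set_aspect("equal")
    plt.show()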

We have shown the Hessian information in a separate panel as follows: At every point on the 2-dimensional plane,

  • The \(+\) or \(-\) indicates whether the Hessian is positive semi-definite (PSD) or negative semi-definite (NSD).
  • The Hessian plot has 3 colors based on the definiteness of the Hessian at that point: Blue (negative definite), Green (positive definite), and Red (indefinite).

For the multivariate bowl, the Hessian is positive semi-definite, in fact positive definite, everywhere in its domain. (Do you remember how to verify this?)
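
One way to verify it: a \( 2 \times 2 \) symmetric matrix is positive definite exactly when its top-left entry and its determinant are both positive, that is, \( \doxx{f} > 0 \) and \( \doxx{f}\,\doyy{f} - \doxy{f}\,\doyx{f} > 0 \). Checking that both eigenvalues are positive works just as well.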

The function has exactly one point where the gradient is zero: \( [0 \text{ } 0] \). Because the Hessian is positive definite there, that point must be a minimum.

This is much easier than inspecting charts and slices of the function along various directions.

Multivariate bowl: gradient and Hessian demo

Gradient and Hessian: Rosenbrock's function

Appearances can be deceiving. Note that in the case of Rosenbrock's function, one might incorrectly assume that there is a maximum somewhere along the \(y\)-axis.

But studying the definiteness of the Hessian suggests otherwise. The Hessian is indefinite in that region. The minimum occurs in the region where the Hessian is positive definite.
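
To check this numerically, here is a small sketch of our own using the standard form of Rosenbrock's function, \( f(x,y) = (1-x)^2 + 100(y - x^2)^2 \) (the demo's constants may differ). It approximates the Hessian with central differences and prints its eigenvalues at a point on the \( y \)-axis and at the minimum \( (1,1) \):

    import numpy as np

    def rosenbrock(x, y):
        # Standard Rosenbrock form; its minimum is at (1, 1).
        return (1 - x)**2 + 100 * (y - x**2)**2

    def numerical_hessian(f, x, y, h=1e-4):
        # Approximate the 2x2 Hessian of f at (x, y) with central differences.
        fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
        fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
        fxy = (f(x + h, y + h) - f(x + h, y - h)
               - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
        return np.array([[fxx, fxy], [fxy, fyy]])

    for point in [(0.0, 1.0), (1.0, 1.0)]:
        eigenvalues = np.linalg.eigvalsh(numerical_hessian(rosenbrock, *point))
        print(point, eigenvalues)
    # At (0, 1) the eigenvalues have mixed signs (indefinite);
    # at (1, 1) both are positive (positive definite).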

Rosenbrock's function: gradient and Hessian demo

Gradient and Hessian: Himmelblau's function

And now comes the interesting part.

You will note that for Himmelblau's function, there is a patch in the center where the Hessian is negative definite. That is the region containing the local maximum. This is analogous to the negative second-order derivative requirement for local maxima in the case of univariate functions.

At the four minima of the function, the Hessian is positive definite and the gradient is zero.
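
You can confirm both regions with a small sketch of our own. It uses the commonly quoted form \( f(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2 \) and its analytic second derivatives, evaluating the Hessian at the known minimum \( (3, 2) \) and near the local maximum at approximately \( (-0.27, -0.92) \):

    import numpy as np

    def himmelblau_hessian(x, y):
        # Analytic Hessian of the commonly used form
        # f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2.
        fxx = 12 * x**2 + 4 * y - 42
        fyy = 12 * y**2 + 4 * x - 26
        fxy = 4 * (x + y)
        return np.array([[fxx, fxy], [fxy, fyy]])

    print(np.linalg.eigvalsh(himmelblau_hessian(3.0, 2.0)))      # both positive: minimum at (3, 2)
    print(np.linalg.eigvalsh(himmelblau_hessian(-0.27, -0.92)))  # both negative: near the local maximum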

Himmelblau's function: gradient and Hessian demo

A function with a saddle

And finally, here's an example of a function with a saddle point.

$$ f(x,y) = 3x^2y + y^3 - 3x^2 - 3y^2 + 2 $$

From the gradient plot, you can see 4 critical points.

Consult the Hessian definiteness plot to identify their nature.

  • Local minimum at \( x = 0, y = 2 \): the Hessian is positive definite and the gradient is zero.
  • Local maximum at \( x = 0, y = 0 \): the Hessian is negative definite and the gradient is zero.
  • Saddle points at \( x = -1, y = 1 \) and \( x = 1, y = 1 \): the Hessian is indefinite and the gradient is zero.
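
These classifications can be verified by hand from the formula above. The gradient components are \( \dox{f} = 6xy - 6x = 6x(y-1) \) and \( \doy{f} = 3x^2 + 3y^2 - 6y \), which vanish exactly at the four points listed. The Hessian is

$$ \mH = \begin{bmatrix} 6y - 6 & 6x \\ 6x & 6y - 6 \end{bmatrix} $$

At \( (0,2) \) it equals \( \begin{bmatrix} 6 & 0 \\ 0 & 6 \end{bmatrix} \) (positive definite), at \( (0,0) \) it equals \( \begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix} \) (negative definite), and at \( (\pm 1, 1) \) it equals \( \begin{bmatrix} 0 & \pm 6 \\ \pm 6 & 0 \end{bmatrix} \), whose eigenvalues are \( \pm 6 \) (indefinite).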

Notice how the function plot is visually deceiving about where the maximum, minimum, and saddle points lie. But the gradient and Hessian never deceive.

Always remember: Gradient helps in identifying critical points. Hessian helps in distinguishing between minima, maxima, and saddle points.

Saddle function: gradient and Hessian demo

Where to next?

Now that you understand multivariate calculus, learn about strategies for computing derivatives in programmatic implementations, or choose another topic from the mathematical foundations.

Already a calculus expert? Check out comprehensive courses on machine learning or deep learning.
