Nearest neighbor classifier

Introduction

All supervised classification models in machine learning rest on a primary assumption: examples belonging to the same class must be similar. In the training phase, a classifier learns the most dominant similarities among examples of the same class, so that new examples can be checked for those similarities. The nearest neighbor classifier works directly from this assumption. Given any unlabeled example, it finds the closest neighbors in the feature space and assigns the majority label among them. Although simple, the nearest neighbor classifier is quite a strong classifier, albeit with some severe practical challenges.

Prerequisites

To understand the nearest neighbor model, we recommend familiarity with the concepts in

Follow the above links to first get acquainted with the corresponding concepts.

Problem setting

In classification, the goal of the predictive model is to identify the class that a particular given instance belongs to.

Consider such an instance \( \vx \in \real^N \), a vector consisting of \( N \) features, \(\vx = [x_1, x_2, \ldots, x_N] \).

We need to assign it to one of the \( M \) classes \( C_1, C_2, \ldots, C_M \) depending on the values of the \( N \) features.

Predictive model

As we described earlier, the nearest neighbor classifier labels an unlabeled example in two steps:

  1. Sort labeled examples from the training set based on their nearness to the given unlabeled example.
  2. Identify the majority label among the top \( K \) nearest neighbors. This is the prediction.

This means the unlabeled instance is assigned to the class that has the most representatives among the \( K \) training examples closest to it.
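As a concrete illustration, here is a minimal Python sketch of these two steps, assuming NumPy arrays for the training data and some distance function supplied by the caller; the names `knn_predict` and `distance` are illustrative, not from any particular library.

```python
from collections import Counter

import numpy as np

def knn_predict(x_new, X_train, y_train, K, distance):
    """Label x_new with the two-step nearest-neighbor rule.

    X_train: (L, N) array of training features, y_train: length-L array of labels,
    distance: any function (x, x') -> non-negative real, e.g. Euclidean distance.
    """
    # Step 1: sort the labeled examples by their nearness to x_new.
    dists = np.array([distance(x_new, x_i) for x_i in X_train])
    nearest = np.argsort(dists)[:K]          # indices of the K closest examples

    # Step 2: the majority label among the K nearest neighbors is the prediction.
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```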

Training nearest-neighbor classifier

Naturally, the questions to ask are

  • How do we quantify nearness?
  • How many neighbors to consider for the prediction? In other words, what is the value of \( K \)?

Beyond answering these important questions, the training phase is almost non-existent for the nearest-neighbor classifier, because the nearest neighbors are identified only when the unlabeled example is presented to the model. There is no learning as such, except the identification of the hyperparameters and metrics that address the questions posed above.

Nearness

Nearness is quantified using a distance or similarity metric, such as the Euclidean distance, the Mahalanobis distance, or cosine similarity. As an example, the Euclidean distance between two examples \( \vx \) and \( \vx' \) is calculated as

$$ d(\vx, \vx') = ||\vx - \vx'||_2 = \sqrt{\sum_{n=1}^N \left(x_n - x'_n \right)^2} $$

This distance is computed from the unlabeled example to every example in the training set \( \set{(\vx_1, y_1), (\vx_2, y_2), \ldots, (\vx_L, y_L)} \). The distances are then sorted in ascending order and the first \( K \) examples are chosen. (If a similarity metric is used to quantify nearness instead, the similarity scores are sorted in descending order and the first \( K \) examples are chosen.)
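The following sketch, again assuming NumPy arrays, shows this computation: distances sorted in ascending order, or, for a similarity metric such as cosine similarity, scores sorted in descending order. The helper names `euclidean` and `k_nearest_indices` are illustrative only.

```python
import numpy as np

def euclidean(x, x_prime):
    """d(x, x') = ||x - x'||_2 for two feature vectors of length N."""
    return np.sqrt(np.sum((x - x_prime) ** 2))

def k_nearest_indices(x_new, X_train, K, use_cosine=False):
    """Indices of the K training examples nearest to x_new."""
    if use_cosine:
        # Similarity metric: larger means nearer, so sort in descending order.
        sims = X_train @ x_new / (np.linalg.norm(X_train, axis=1) * np.linalg.norm(x_new))
        return np.argsort(-sims)[:K]
    # Distance metric: smaller means nearer, so sort in ascending order.
    dists = np.array([euclidean(x_new, x_i) for x_i in X_train])
    return np.argsort(dists)[:K]
```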

Choosing \( K \)

The value of \( K \) plays a significant role in the performance of the nearest neighbor classifier.

When \( K \) equals 1, the prediction relies on only one neighbor and is very localized. Such a model has very high variance: the prediction could be different if the nearest training example that led to the current prediction were missing from the training set. Just changing a few examples in the training set could change the predictions for many unseen examples.

When the value of \( K \) is set very high, the prediction relies on a significant fraction of the training set, rendering it less local. This may provide significant support for the prediction of the unlabeled instance, but it can create problems in sub-regions that have very few examples of the correct class, especially near class boundaries. It can also be a problem when dealing with imbalanced or rare classes that have very few training examples.

Thus, it is important to arrive at a value of \( K \) that is just right. As with all hyperparameter tuning in machine learning, we use cross-validation to arrive at a suitable value for \( K \): we try several candidate values and choose the one with the highest cross-validation accuracy.
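As a sketch of this tuning loop, assuming scikit-learn's `KNeighborsClassifier` and `cross_val_score` are acceptable stand-ins and using a synthetic dataset and candidate list chosen purely for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Try several candidate values of K and keep the one with the
# highest mean cross-validation accuracy.
candidates = [1, 3, 5, 7, 9, 15, 25]
scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in candidates]
best_K = candidates[int(np.argmax(scores))]
print("Best K:", best_K, "with cross-validation accuracy", max(scores))
```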

On that note, one interesting property of the \( K=1 \) nearest neighbor classifier is that, given infinite training data, its error rate is never more than twice the minimum achievable error rate of an optimal classifier. This means that as the number of training examples \( L \to \infty \), the error rate of the \( K=1 \) classifier is at most twice that of a classifier that uses the true class distributions for prediction [Cover and Hart, 1967].
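In symbols, writing \( E^* \) for the error rate of the optimal (Bayes) classifier and \( E_{1\text{-}NN} \) for the asymptotic error rate of the \( K=1 \) rule (notation introduced here only for brevity), the Cover and Hart result is commonly quoted as

$$ E^* \le E_{1\text{-}NN} \le 2E^* $$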

K-nearest neighbor classifier demo

Let us understand the predictive model of the \(K\)-nearest neighbor classifier on some data. The training phase is non-existent. Choose a value of \( K \) by adjusting the slider and then view the regions of the feature space assigned to each class. You may also adjust the number of categories in the classification problem.

[Interactive demo: KNN classifier]

Note that the KNN boundaries are nonlinear. As you reduce the value of \( K \), the boundaries get rougher, and as you increase it, the boundaries get smoother. This happens because larger values of \( K \) ensure more support for each prediction, and such predictions do not change as rapidly as boundaries that depend on fewer points.

Nearest neighbors for regression

In regression, the goal of the predictive model is to predict a continuous valued output for a given multivariate instance.

Consider such an instance \( \vx \in \real^N \), a vector consisting of \( N \) features, \(\vx = [x_1, x_2, \ldots, x_N] \).

We need to predict a real-valued output \( \hat{y} \in \real \) that is as close as possible to the true target \( y \in \real \). The hat \( \hat{ } \) denotes that \( \hat{y} \) is an estimate, to distinguish it from the truth.

The extension of nearest neighbor model to the regression problem is straightforward.

Instead of identifying the majority label among nearest neighbors, we choose the mean of the target variable of the nearest neighbors.

Thus, the predictive model for nearest neighbor regression is

$$ \yhat = \frac{1}{K} \sum_{\vx_i \in \mathcal{N}_K(\vx)} y_i $$

where \( \mathcal{N}_K(\vx) \) is the set of \( K \) nearest neighbors of the unlabeled example \( \vx \).
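A minimal sketch of this regression rule, assuming NumPy arrays for the training data and Euclidean distance for nearness (the function name `knn_regress` is illustrative):

```python
import numpy as np

def knn_regress(x_new, X_train, y_train, K):
    """Predict a continuous target as the mean over the K nearest neighbors."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to every training example
    nearest = np.argsort(dists)[:K]                   # indices of the K nearest neighbors
    return float(np.mean(y_train[nearest]))           # y_hat = (1/K) * sum of their targets
```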

Dealing with feature types

Note that the predictive model involves the calculation of a distance or similarity metric. For the most commonly used metrics, such as Euclidean distance or cosine similarity, this is straightforward for binary and continuous features, since both can be treated as real-valued.

In the case of categorical features, a direct metric calculation is not possible. Therefore, we first need to preprocess the categorical variables using one-hot encoding to arrive at a binary feature representation.
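For example, using pandas (one of several ways to one-hot encode; the column names below are purely illustrative), a categorical column can be expanded into binary indicator columns that a distance metric can consume:

```python
import pandas as pd

# A toy table with one categorical feature ("color") and one continuous feature.
df = pd.DataFrame({"color": ["red", "green", "blue", "green"],
                   "size": [1.2, 0.7, 3.1, 2.4]})

# Expand "color" into binary indicator columns; the resulting rows are
# real-valued vectors that a Euclidean or cosine metric can consume.
encoded = pd.get_dummies(df, columns=["color"])
print(encoded)
```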

Practical challenges

Despite being an easy to understand approach, the nearest neighbor classifier is quite challenging in practical applications.

  • The training phase is nonexistent. This may seem like a good thing, but it is not. That's because all the training examples need to be available to the model for identifying the nearest examples. Depending on the training set size, this may rule out several deployment scenarios, such as those in embedded systems or those where training data is proprietary.
  • Discovering the nearest examples at prediction time means the computational platform for the predictive model needs to be powerful enough to quickly compute distances to all training examples to arrive at each prediction. Compare this to parametric classification approaches, where the predictive model is a simple calculation involving the unlabeled instance and the learnt parameters of the model.
  • In high-dimensional feature spaces, the nearest neighbors may actually be quite far from the unlabeled example, far enough that they should not be considered neighbors at all. In other words, the curse of dimensionality is a severe challenge for the nearest neighbor classifier due to the infeasibility of discovering substantially similar neighbors in high-dimensional spaces.
