Types of tasks in machine learning

Introduction

Machine learning is a broad field with a variety of approaches to addressing a gamut of tasks. In this article, we will describe some of the commonly addressed tasks using machine learning. We will also comment and point to suitable approaches for handling such tasks.

Prerequisites

To understand the variety of tasks in machine learning, we recommend first getting acquainted with the introductory concepts covered in our prerequisite article on machine learning.

Classification

Classification is the task of automatically assigning categories (or classes) to given instances. A machine learning model trained to achieve this goal is known as a classifier. Classification falls in the realm of supervised learning — the sub-field of machine learning in which models are trained by observing labeled (supervised) examples. For example, to train a classifier to identify spam emails, each supervised example will be a tuple consisting of the email information (text, subject, sender, recipient) and its category (spam or not spam).

Depending on the number of categories and their relationships, classification problems fall into several types.

  • Binary classification: An instance must belong to exactly one among two categories. The classifier itself is known as a binary classifier.
  • Multi-class classification: An instance must belong to exactly one among many (more than two) categories. In a multi-class scenario, the categories are mutually exclusive.
  • Multi-labeled classification: An instance may simultaneously belong to more than one category from among several categories. Thus, in a multi-labeled setup, the categories are not mutually exclusive.

Mathematically, we can express the classification problem as follows: If \( \vx \) denotes an \( \ndim\)-dimensional input instance, then the goal of classification is to assign \( \vx \) to the appropriate category (or categories, in the case of multi-labeled) from among \( \nclass \) categories \( \set{C_1, \ldots, C_\nclass} \), where \( \nclass \ge 2 \).

The classifier is trained over a collection of labeled observations provided as tuples \( (\vx_i,y_i) \) containing the instance vector \( \vx_i \) and the true target variable \( y_i \in \set{C_1,\ldots,C_\nclass}\). This collection of \( \nlabeled \) labeled observations is known as the labeled training set, or simply the training set, \( \labeledset = \set{(\vx_1,y_1), \ldots, (\vx_\nlabeled,y_\nlabeled)} \).
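As a concrete illustration, the following is a minimal sketch of training a binary classifier on such a labeled training set, assuming the scikit-learn library; the feature vectors and labels are invented stand-ins for the spam example above.

```python
# Minimal sketch: train a binary classifier on a toy labeled training set.
# Assumes scikit-learn; the numbers below are illustrative, not real email data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: each row of X is an instance vector x_i (e.g., two
# numeric features extracted from an email), and y_i is its category.
X = np.array([[3.0, 0.1], [2.5, 0.2], [0.2, 4.0], [0.1, 3.5]])
y = np.array([1, 1, 0, 0])  # 1 = spam, 0 = not spam

clf = LogisticRegression().fit(X, y)  # train the classifier
print(clf.predict([[2.8, 0.3]]))      # assign a category to a new instance
```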

Below, we list some of the most popular approaches to classification. Follow the corresponding links to study the classifiers in further detail.

  • Logistic regression
  • Support vector machines
  • Decision trees and random forests
  • Naive Bayes
  • k-nearest neighbors
  • Neural networks

The binary classifiers in the above list can be adapted to support multi-class or multi-labeled classification scenarios through the one-vs-one or one-vs-rest strategies.
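As a sketch of the one-vs-rest strategy, the snippet below wraps a binary linear SVM so that one binary problem is solved per category; this assumes scikit-learn's OneVsRestClassifier wrapper, and the toy data and class encoding are invented for illustration.

```python
# Sketch of one-vs-rest: fit one binary classifier per category C_1..C_K.
# Assumes scikit-learn; data and class encoding are illustrative.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

# Toy multi-class data: three categories encoded as 0, 1, and 2.
X = np.array([[0.0, 0.1], [0.2, 0.0],
              [5.0, 5.1], [5.2, 4.9],
              [0.1, 5.0], [0.0, 4.8]])
y = np.array([0, 0, 1, 1, 2, 2])

ovr = OneVsRestClassifier(LinearSVC()).fit(X, y)  # one binary SVM per class
print(ovr.predict([[5.1, 5.0]]))                  # predicted category
```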

Regression

Regression is the task of assigning a real-valued output to an input instance. For example, we may need to predict the selling price, a real number, of a house given its location, area, lot-size, number of bedrooms, bathrooms, and installed amenities. Just like classification, regression models are also trained using the supervised learning approach to machine learning.

Mathematically, we can express the regression problem as follows: If \( \vx \) denotes an \( \ndim\)-dimensional input instance, then the goal of regression is to predict a real-valued output \( y \in \real \) for the input \( \vx \).

The regression model is trained over a collection of supervised observations provided as tuples \( (\vx_i,y_i) \) containing the instance vector \( \vx_i \) and the true target variable \( y_i \in \real \). This collection of \( \nlabeled \) labeled observations is known as the training set, \( \labeledset = \set{(\vx_1,y_1), \ldots, (\vx_\nlabeled,y_\nlabeled)} \).

In some problem settings, the output variable is also a multi-dimensional vector \( \vy \in \real^\nclass \). Such scenarios are known as multi-output regression problems.
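As an illustration, here is a minimal sketch of fitting a regression model on a toy training set, assuming scikit-learn; the house features and prices are invented. (LinearRegression also accepts a two-dimensional target, which covers the multi-output case.)

```python
# Minimal sketch: fit a regression model on a toy training set.
# Assumes scikit-learn; the house features and prices are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[120.0, 3], [150.0, 4], [80.0, 2], [200.0, 5]])  # area, bedrooms
y = np.array([240.0, 310.0, 160.0, 420.0])                     # selling prices

reg = LinearRegression().fit(X, y)  # learn the mapping x -> y
print(reg.predict([[140.0, 3]]))    # real-valued prediction for a new house
```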

Below, we list some of the most popular approaches to regression. Follow the corresponding links to study the regression models in further detail.

  • Linear regression
  • Ridge and lasso regression
  • Support vector regression
  • Regression trees and random forests
  • Neural networks

Clustering

Clustering involves the assignment of input instances into groups or clusters of similar instances. For example, we may wish to automatically group news items coming from disparate sources into clusters of related news to be summarized by a single headline. Clustering is an unsupervised learning task, one that does not involve prior supervision or labels indicating the assignment of individual instances to groups.

Mathematically, we can express the clustering problem as follows: Consider observations represented as vectors, for example \( \vx \in \real^\ndim \) — vectors consisting of \( \ndim \) features, \(\vx = [x_1, x_2, \ldots, x_\ndim] \). A collection of such observations is provided in the unlabeled set \( \unlabeledset = \set{\vx_1,\ldots,\vx_\nunlabeled} \). The goal of clustering is to automatically infer groups of examples \( G_1, \ldots, G_\nclass \) such that the examples belonging to a single group, say \( G_i \), are similar in some sense. The groups \( G_1, \ldots, G_\nclass \) are not pre-defined; they are discovered automatically as part of the clustering process.
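As a concrete sketch, the snippet below groups an unlabeled set into two clusters with k-means, assuming scikit-learn; the 2-D points are invented for illustration.

```python
# Minimal sketch: discover K groups in an unlabeled set with k-means.
# Assumes scikit-learn; the 2-D points are illustrative.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled set U = {x_1, ..., x_M}: two loose groups in feature space
X = np.array([[0.0, 0.2], [0.1, 0.0], [0.2, 0.1],
              [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # automatically discovered group of each instance
```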

Below, we list some of the most popular approaches to clustering. Follow the corresponding links to study the specifics of a clustering algorithm in further detail.

  • k-means clustering
  • Hierarchical (agglomerative) clustering
  • DBSCAN
  • Gaussian mixture models
  • Spectral clustering

Density estimation

Density estimation is the task of modeling a probability density function over some feature space to facilitate the estimation of the density of any given input instance. For example, given observations of past confirmed locations of underground oil reserves, a density estimator may be learned that provides the likelihood of an oil reserve at any given input location, conditioned on the historical observations. Just like clustering, density estimation is an unsupervised learning approach in machine learning.

Mathematically, we can express the density estimation problem as follows: Consider observations represented as vectors, for example \( \vx \in \real^\ndim \) — vectors consisting of \( \ndim \) features, \(\vx = [x_1, x_2, \ldots, x_\ndim] \). A collection of such observations is provided in the unlabeled set \( \unlabeledset = \set{\vx_1,\ldots,\vx_\nunlabeled} \). The goal of density estimation is to automatically infer the probability density function that best aligns with the observations in \( \unlabeledset \), so that \( p(\vx) \) can be estimated accurately for any instance \( \vx \) in that feature space.
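As an illustration, here is a minimal sketch of kernel density estimation on a toy unlabeled set, assuming scikit-learn's KernelDensity; the observations and bandwidth are invented.

```python
# Minimal sketch: estimate p(x) from unlabeled observations with a
# Gaussian kernel density estimator. Assumes scikit-learn; toy 1-D data.
import numpy as np
from sklearn.neighbors import KernelDensity

X = np.array([[-1.0], [-0.8], [0.0], [0.7], [1.1]])  # unlabeled observations

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(X)
log_p = kde.score_samples(np.array([[0.1]]))  # returns log p(x)
print(np.exp(log_p))                          # estimated density at x = 0.1
```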

Below, we list some of the most popular approaches to density estimation. Follow the corresponding links to study the specifics of a density estimation algorithm in further detail.

  • Histograms
  • Kernel density estimation
  • Gaussian mixture models

Dimensionality reduction

Dimensionality reduction, as the name implies, involves transforming a multivariate input instance into an output instance with fewer dimensions than the input, while retaining task-relevant information. For example, we may wish to reduce a 10-dimensional dataset to a 2-dimensional dataset for easy visualization as a scatter plot, while retaining, say, the natural groupings among the input instances even in the two-dimensional space. Dimensionality reduction may involve an unsupervised or a supervised learning strategy, depending on whether the reduced dimensions are informed by categorical labels.

Mathematically, we can express the dimensionality reduction problem as follows: Consider observations represented as vectors, for example \( \vx \in \real^\ndim \) — vectors consisting of \( \ndim \) features, \(\vx = [x_1, x_2, \ldots, x_\ndim] \). In dimensionality reduction, we wish to transform these into output vectors with \( \nclass \) dimensions, \( \vy \in \real^\nclass \), such that \( \nclass \ll \ndim \).
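As a concrete sketch, the snippet below projects 10-dimensional instances down to 2 dimensions with principal component analysis, assuming scikit-learn; the random data stand in for a real dataset.

```python
# Minimal sketch: reduce d = 10 features to 2 with PCA for visualization.
# Assumes scikit-learn; the random data stand in for a real dataset.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # 100 instances, 10 features each

Z = PCA(n_components=2).fit_transform(X)  # reduced instances in R^2
print(Z.shape)                            # (100, 2)
```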

Below, we list some of the most popular approaches to dimensionality reduction. Follow the corresponding links to study the specifics of a dimensionality reduction algorithm in further detail.

  • Principal component analysis (PCA)
  • Linear discriminant analysis (LDA)
  • t-SNE
  • UMAP
  • Autoencoders

In addition to these, the clustering approaches that we saw earlier can also be viewed as dimensionality reduction, where each example is reduced to a single dimension: its cluster assignment!
