Introduction
Regularization is a collection of strategies that help a learning algorithm generalize better to new inputs, often at the expense of performance on the training set. In this sense, it reduces the risk of overfitting the training data, typically lowering the model's variance at the cost of a small increase in bias.
Some model families, such as decision trees, use regularization strategies designed specifically for their structure. Deep neural networks offer many additional strategies, which we cover in a separate comprehensive article on regularization in deep learning. Other models, especially parametric models with weight vectors, can be regularized with norm penalties on those weight vectors. In this article, we will cover these norm-based regularization strategies.
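To make the idea concrete before we dive in, here is a minimal sketch of a norm penalty in action: an L2 penalty added to a squared-error loss for a linear model, minimized by gradient descent. The synthetic data, the penalty strength alpha, and the learning rate are illustrative assumptions, not values from this article.

import numpy as np

# Minimal sketch: L2-penalized (ridge) linear regression via gradient descent.
# Objective: J(w) = (1/n) * ||Xw - y||^2 + alpha * ||w||^2
# (data, alpha, and learning rate are arbitrary illustrative choices)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # 100 samples, 5 features
true_w = np.array([1.5, -2.0, 0.0, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.5, size=100)

alpha = 0.1   # penalty strength: larger alpha shrinks weights more (more bias, less variance)
lr = 0.05     # gradient-descent step size
w = np.zeros(5)

for _ in range(1000):
    residual = X @ w - y
    # Gradient of the data term (2/n) * X^T (Xw - y) plus the penalty term 2 * alpha * w
    grad = (2 / len(y)) * X.T @ residual + 2 * alpha * w
    w -= lr * grad

print("penalized weights:", np.round(w, 3))

Increasing alpha pulls the weights toward zero, which is exactly the bias-variance trade made explicit above: the fit to the training data worsens slightly, but the model becomes less sensitive to noise in any particular training sample.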