Supervised Learning — Classification, Regression, Generalizing, Overfitting, Underfitting


Generalization, Overfitting and Underfitting

It is more useful to evaluate these three concepts together, since their causes are closely related. Generalization refers to a model's ability to perform well on new, real-world data after the training process. For instance, assume again that the dataset consists of cat images and dog images. Even if the images are rotated at different angles and/or flipped, human eyes can still distinguish cats from dogs; thanks to our perception, we generalize easily. A machine learning model, on the other hand, has to put real effort into achieving the same.

Customer database of boat company

In another example, a boat company wants to send advertising e-mails to potential customers, and to predict which people to target by training on its customer database. Looking at the dataset, it appears that the people who bought a boat are over 45 years old and either have fewer than 3 children or are divorced. According to the dataset we have, the accuracy of this rule is 100%!

On the other hand, when we look at the ages of the people who bought a boat, we observe that they are 66, 52, 53 and 58. In other words, it would not be unreasonable to propose that people over 50 years old tend to buy boats. But the dataset also contains someone older than 50 who did not buy a boat, so the accuracy of this simpler rule is not 100%.

Now let’s compare the two models we prepared. The first model, with the condition “over 45 years old and either fewer than 3 children or divorced”, is more complex than the second one. Although the accuracy of the complex model is 100% on the training data, it is not preferable, because it overfits. Overfitting occurs when a model conforms too closely to the peculiarities of the training set and cannot generalize to new data from outside it. As the example shows, the accuracy rate on the training dataset is very high: such models memorize the dataset rather than learning from it. Conversely, if the model is too simple and its accuracy is too low even on the training data, we call it underfitting.
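The boat example can be sketched in a few lines of code. The customer records and both decision rules below are hypothetical, made up to match the numbers in the article; they are not real data.

```python
# Toy reconstruction of the boat example: a complex rule that fits the
# training data perfectly versus a simpler rule that makes one mistake.

# Each customer: (age, number_of_children, divorced, bought_boat)
customers = [
    (66, 1, False, True),
    (52, 2, True,  True),
    (53, 1, False, True),
    (58, 0, False, True),
    (55, 4, False, False),  # over 50, but did not buy a boat
    (30, 2, False, False),
    (40, 0, True,  False),
]

def complex_rule(age, children, divorced):
    # "over 45 years old and (fewer than 3 children or divorced)"
    return age > 45 and (children < 3 or divorced)

def simple_rule(age, children, divorced):
    # "people over 50 tend to buy boats"
    return age > 50

def accuracy(rule):
    correct = sum(rule(a, c, d) == bought for a, c, d, bought in customers)
    return correct / len(customers)

print(accuracy(complex_rule))  # 1.0 -- perfect on this dataset
print(accuracy(simple_rule))   # below 1.0 -- one customer breaks the rule
```

The complex rule scores 100% here, but that number describes memorization of these seven rows, not performance on the next customer who walks in.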

Underfitting, Optimum, Overfitting

There are many methods to prevent overfitting and underfitting. In the case of overfitting, the following can help:

Adding more data

Data augmentation

Removing some features from the data
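Data augmentation, the second remedy above, can be sketched with plain NumPy, assuming images are stored as arrays (the 4×4 array below is a stand-in for a real photo):

```python
# A minimal sketch of data augmentation: flips and 90-degree rotations
# turn one labeled image into several training samples with the same label.
import numpy as np

def augment(image):
    """Return simple augmented variants of a single image array."""
    return [
        np.fliplr(image),       # horizontal flip
        np.flipud(image),       # vertical flip
        np.rot90(image),        # rotate 90 degrees
        np.rot90(image, k=2),   # rotate 180 degrees
    ]

image = np.arange(16).reshape(4, 4)  # stand-in for a cat/dog photo
variants = augment(image)
print(len(variants))  # 4 extra samples from one original
```

This connects back to the cat/dog example: a human recognizes a flipped dog instantly, while a model often has to be shown the flipped version during training.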

In the case of underfitting:

Increasing the model complexity

Reducing regularization

Adding features to the training data
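The last remedy, adding features, can be illustrated with a small least-squares fit on synthetic data (the data and feature choices below are purely illustrative): a straight line cannot capture a quadratic pattern, but adding an x² feature lets the same linear solver fit it.

```python
# Sketch of curing underfitting by adding a feature.
import numpy as np

x = np.linspace(-1, 1, 20)
y = x ** 2  # quadratic target: a straight line underfits this

# Simple model: features [1, x]
A1 = np.column_stack([np.ones_like(x), x])
# Richer model: features [1, x, x^2]
A2 = np.column_stack([np.ones_like(x), x, x ** 2])

def fit_error(A):
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean((A @ coeffs - y) ** 2)

print(fit_error(A1))  # large error: the line underfits
print(fit_error(A2))  # near zero: the added feature fixes it
```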

The more complex we allow our model to be, the better it will predict on the training data. However, if the model becomes too complex, it starts to focus too much on each individual point in the training set and may not generalize well to new data. In between there is a sweet spot that gives the best generalization performance, and that is the model we want to find.
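The sweet spot can be seen numerically by fitting polynomials of increasing degree to noisy synthetic data and comparing training and test error (all data here is generated, and the specific degrees are illustrative):

```python
# Training error always drops as the model gets more complex, while test
# error traces a U-shape: high for underfitting, high again for overfitting.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

x_train, y_train = x[::2], y[::2]   # even indices for training
x_test, y_test = x[1::2], y[1::2]   # odd indices held out for testing

def mse(degree, xs, ys):
    # Always fit on the training split, then evaluate on (xs, ys).
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, xs) - ys) ** 2)

for degree in (1, 3, 12):
    print(degree, mse(degree, x_train, y_train), mse(degree, x_test, y_test))
# degree 1 underfits (high error everywhere), degree 12 drives training
# error down by chasing noise, and a middle degree generalizes best.
```

Because each higher-degree model contains the lower-degree ones as a special case, training error can only decrease with degree; test error is what reveals the sweet spot.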


Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
