Model Drift in Machine Learning

Understanding and Dealing with Model Drift

Photo by Aral Tasher on Unsplash

All things tend towards disorder. The second law of thermodynamics states that “as one goes forward in time, the net entropy (degree of disorder) of any isolated or closed system will always increase (or at least stay the same)”. Thus, nothing lasts forever. Our youth does not last forever, the best becomes the worst, and our machine learning models degrade as time does its thing.

The world is not static; it’s dynamic and continually changing. A spam email from the 2000s isn’t the same as a spam email in 2021. The features used to detect fraudulent emails in 2021 would differ significantly from those of the 2000s — people got smarter, including scammers. If we attempted to use a model developed in the year 2000 to classify whether emails from 2021 are fraudulent or not, we could expect its predictive power to be far worse than it was on emails from the year 2000. This paradigm describes a concept known as model drift.

Model Drift is the decay of a model’s predictive power as a result of alterations in the environment. If things stayed the same, i.e. the environment and the data, we should expect our machine learning model’s predictive power to remain constant. However, we all know the real world is ever changing. The changes in a real-world environment affect the relationships between variables, thus making the model’s predictions worse. For example, changes in the way spam emails are presented will affect our machine learning model’s ability to detect spam emails.

Essentially, we should expect a machine learning model to lose predictive power because, over time, things change. If we do not detect a drifting model in time, the effects can be severely detrimental to our pipeline, services, and/or business as a whole. But how do we know when a model is drifting? It starts by understanding the ways a model may drift.

The Different Types of Model Drift

To know how to deal with model drift, we must first understand the type of model drift we are dealing with and what is causing it.

Concept Drift

When we train a machine learning model on data, the model learns a function that maps the features to the target variable. As previously stated, if all things were static and nothing evolved over time, then we’d expect the relationship from the features to the target to hold true, thus the model should perform as it always has.

However, the reality is that things do change. The statistical properties of the target variable change, and so does its meaning. Therefore, the mapping that was learned by the machine learning model is no longer suitable for the new environment — e.g. the idea of what a spammer is has evolved over time.

Data Drift

With concept drift, the definition of a spammer changes — the statistical properties of the target variable change in unforeseen ways. Contrastingly, data drift refers to the change in the properties of the features. The underlying changes in the features cause the model to fail.

Data may drift because of:

  • Seasonality — regular and predictable changes in data that recur every calendar year
  • Consumer Preferences — what we prefer changes over time. What we like today may not be what we like tomorrow.
  • Etc.
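As a minimal sketch of checking for data drift, one common approach (an assumption here, not named in the original) is a two-sample Kolmogorov–Smirnov test comparing a feature’s distribution at training time against its distribution in production. The synthetic data and the 0.05 significance level below are purely illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Feature values seen at training time vs. in production.
# The production distribution has a shifted mean, simulating drift.
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
live_feature = rng.normal(loc=0.5, scale=1.0, size=1000)

# Two-sample Kolmogorov–Smirnov test: a small p-value suggests the
# live feature no longer follows the training distribution.
statistic, p_value = stats.ks_2samp(train_feature, live_feature)

drift_detected = p_value < 0.05
print(f"KS statistic={statistic:.3f}, p={p_value:.4f}, drift={drift_detected}")
```

In practice this check would run per feature on a recent window of production data, with the significance level tuned to balance false alarms against missed drift.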

Detecting Model Drift

Both concept and data drift are a response to statistical changes in the data. Thus, monitoring the statistical properties of the data, the model’s predictions, and their correlation with other factors is a sensible starting point.

For example, we can monitor the F1 score — depending on the problem — to determine whether our model is still performing as it should. Once the model’s F1 score drops below a specified threshold, we can infer that our model is drifting. Or we can monitor the output of a prediction along with other features. For instance, if we see an increase or decrease in the number of fraudulent transactions at a rate that is severely different from that of active users, then we can infer that some drift may be occurring.
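A minimal sketch of the threshold check described above — the 0.8 floor and the toy labels are assumptions for illustration, not values from any real system:

```python
from sklearn.metrics import f1_score

F1_THRESHOLD = 0.8  # assumed acceptable floor; tune per problem


def check_for_drift(y_true, y_pred, threshold=F1_THRESHOLD):
    """Flag possible model drift when live F1 falls below the threshold."""
    score = f1_score(y_true, y_pred)
    return score, score < threshold


# Labels collected from a recent window of live traffic (toy data).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]  # the model is starting to miss positives

score, drifting = check_for_drift(y_true, y_pred)
print(f"F1={score:.2f}, drifting={drifting}")
```

A check like this would typically run on a schedule against freshly labeled production data, alerting when the score crosses the threshold.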

Overcoming Model Drift

Detecting model drift is equivalent to admitting you have a problem. Once you’ve realized there’s a problem, the next step is figuring out how to overcome it and creating a plan. In the case of model drift, there are two popular approaches for tackling the problem.

The first method involves periodically retraining your machine learning model. This method is quite effective. If we are aware that a model degrades every 3 months, then a good idea would be to retrain the model every 2 months to ensure the performance of the model never falls below a predetermined threshold.
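The cadence check behind periodic retraining can be sketched simply — the 60-day window below mirrors the two-month example above and is an assumption, not a recommendation:

```python
import datetime as dt

RETRAIN_EVERY = dt.timedelta(days=60)  # assumed two-month cadence


def needs_retraining(last_trained: dt.datetime, now: dt.datetime) -> bool:
    """Return True when the retraining cadence has elapsed."""
    return now - last_trained >= RETRAIN_EVERY


last_trained = dt.datetime(2021, 1, 1)
due_feb = needs_retraining(last_trained, dt.datetime(2021, 2, 15))  # 45 days
due_mar = needs_retraining(last_trained, dt.datetime(2021, 3, 15))  # 73 days
print(due_feb, due_mar)
```

In a real pipeline this check would be replaced or supplemented by the performance threshold from the previous section, so retraining triggers on evidence of degradation rather than on the calendar alone.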

Another solution to address model drift is online learning. Online learning is a method of machine learning in which a model learns in real time. Data becomes available in a sequential manner and is used to update the model at each step, so that predictions on future data always reflect the most recent information.

Disclaimer: Executing online learning is extremely tricky but when it’s done correctly, the results that can be achieved are phenomenal.
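A minimal sketch of online learning on a simulated data stream, assuming scikit-learn’s `SGDClassifier` and its `partial_fit` method (one common incremental-learning tool; the stream and labeling rule are synthetic):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# partial_fit updates the model's weights on each new mini-batch
# instead of refitting from scratch on the full history.
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

for step in range(50):
    # Simulated stream: the label is 1 when the first feature is positive.
    X_batch = rng.normal(size=(32, 2))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)

# Evaluate on fresh data drawn from the same stream.
X_test = rng.normal(size=(200, 2))
y_test = (X_test[:, 0] > 0).astype(int)
accuracy = model.score(X_test, y_test)
print(f"accuracy={accuracy:.2f}")
```

The tricky part alluded to in the disclaimer is everything around this loop: validating incoming labels, guarding against feedback loops and poisoned batches, and deciding when the stream itself has drifted.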

Final Thoughts

As time goes on, most things deteriorate; our bones get weaker, bananas go bad, and machine learning models lose their predictive power. This is a concept known as model drift. Both types of model drift involve statistical changes within the data, but detecting model drift is still a difficult task. Many startups have popped up over the years to tackle this problem, and teams around the world have come up with a number of ways to detect and overcome the issue.

Thanks for Reading!

If you enjoyed this article, connect with me by subscribing to my FREE weekly newsletter. Never miss a post I make about Artificial Intelligence, Data Science, and Freelancing.
