Continuous Integration & Experimentation

How our Deep Learning Models Evolve

Photo by Jennifer Griffin on Unsplash

“The only constant in life is change.” — Heraclitus

I’m a big fan of continuity.

The environment around us changes every second. Our bodies and minds continuously learn and adapt every time we experience something new. A new city, a new person, a new recipe.

Technology is another example of continuity. It advances so fast, continuously building on top of every new discovery. We learn, we adapt, and we go one step further, using past knowledge to discover the new.

I’m also a big believer in continuous education, and I advise everyone around me to keep learning and educating themselves. If we don’t adapt and learn, we are left behind. It’s inevitable.

But that’s not what I want to talk about.

Today I want to talk about the importance of continuous experimentation and continuous integration, and about why it is important to invest early on in building a solid infrastructure that supports those concepts.

Introduction

Here at Predicto, we deal with stocks and cryptocurrencies. As you probably know, conditions in those markets change rapidly. What happened a week ago is already old news. Continuous learning and adaptation are key.

We currently track more than 150 stocks from the US stock market plus several cryptocurrencies. We maintain at least 3 different Deep Learning models for each stock, which give us daily forecasts for the next 2 to 3 weeks.

Our goal is not to predict the future.

Our goal is to identify complex patterns, provide uncertainty and risk metrics and, finally, generate explainable forecasts that can give us hints about potential opportunities on a specific stock.

We want our models to always be up to date and absorb the latest market conditions. This means that we need to regularly retrain and fine-tune hundreds of them. It is also very important to continuously experiment with new ideas and adapt quickly to what we learn.

The market doesn’t wait, and opportunity windows are very narrow.

To accomplish those goals and streamline the process, we designed and implemented a general-purpose forecasting platform, datafloat.ai, that allows us to:

  • Perform model training at scale. We can train hundreds of models in a few hours.
  • Experiment with new models easily, with zero code. We can design a new deep learning model in a few minutes that can forecast anything we ask it to, with a few clicks (see the sketch below).
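
For flavor, here is what a declarative model definition on such a platform might boil down to. This is only an illustrative sketch in Python; the field names and values are assumptions on my part, not datafloat.ai’s actual schema.

    # Illustrative only: these fields are hypothetical, not datafloat.ai's actual schema.
    forecast_spec = {
        "target": "FB_close",              # the series we want to forecast
        "horizon_days": 15,                # roughly the next 2 to 3 trading weeks
        "history_years": 5,                # how much past data to train on
        "features": ["volume", "sector_index", "volatility"],
        "architecture": {"type": "lstm", "layers": 2, "units": 64},
        "retrain_every_days": 60,          # matches a two-month retraining cadence
    }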

Let’s dig into the details.

Model training at scale

As mentioned earlier, we want our models to always be up to date and absorb the latest market conditions. Currently, our retraining frequency is once every two months, which means we need to retrain or fine-tune more than 500 Deep Learning models each cycle. Each model involves many years of data, a large number of features, and several layers.

Here is how this process works:

Our latest training environment is packaged in a Docker container that lives in a private container registry. It contains an agent that listens for new model training requests posted to a message queue. When we are ready to start the retraining process, we scale out our container instances in the cloud to process requests in parallel.
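
To make the flow concrete, here is a minimal sketch of such an agent loop in Python. It is not our actual implementation: the queue client interface and the train_model / upload_artifacts helpers are hypothetical placeholders.

    import json
    import time

    def train_model(request):
        # Placeholder: load the stock's data, fine-tune the network, compute metrics.
        model, metrics = None, {}
        return model, metrics

    def upload_artifacts(request, model, metrics):
        # Placeholder: push model weights, validation metrics, and graphs to storage.
        pass

    def run_agent(queue_client, poll_interval=30):
        """Each container instance runs this loop, pulling training requests off the queue."""
        while True:
            message = queue_client.receive()        # poll the message queue for a request
            if message is None:
                time.sleep(poll_interval)           # queue is drained; wait and poll again
                continue
            request = json.loads(message.body)      # e.g. {"ticker": "FB", "model": "v3"}
            try:
                model, metrics = train_model(request)
                upload_artifacts(request, model, metrics)
                queue_client.complete(message)      # acknowledge and remove from the queue
            except Exception:
                queue_client.abandon(message)       # release it so another agent can retry

Scaling the retraining run then amounts to adding more container instances, since each one independently pulls requests from the same queue.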

After each model is trained, the agent generates validation metrics and graphs that we can use to validate the model automatically (or inspect manually later). Below you can see some of the generated graphs for one of our Facebook stock models as an example.
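
As a feel for what an automated validation gate can look like, here is a small Python sketch. The metric, the threshold, and the example prices are illustrative assumptions, not our production criteria or real data.

    import numpy as np

    def mape(actual, forecast):
        # Mean absolute percentage error over the holdout window.
        actual = np.asarray(actual, dtype=float)
        forecast = np.asarray(forecast, dtype=float)
        return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

    def validate(actual, forecast, max_mape=8.0):
        # Accept the retrained model only if its holdout error is below the threshold.
        error = mape(actual, forecast)
        return {"mape": error, "accepted": error <= max_mape}

    # Example with made-up prices: last 10 trading days held out from training.
    actual   = [331.2, 329.8, 333.5, 335.1, 334.0, 338.2, 336.9, 340.4, 339.7, 342.1]
    forecast = [330.5, 331.0, 332.8, 336.0, 335.2, 337.0, 338.1, 339.5, 341.0, 341.8]
    print(validate(actual, forecast))   # roughly {'mape': 0.3, 'accepted': True}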
