Google Cloud Platform: N-BEATS Component

Creating a custom container image for the N-BEATS deep learning model architecture


Machine Learning Operations (MLOps) and Google Cloud

Machine learning (ML) operations, better known as MLOps, is the most important component of enterprise AI. Without it, machine learning models and artificial intelligence (AI) systems never move beyond the experimental or proof-of-concept (PoC) stage. MLOps is the critical infrastructure that allows machine intelligence systems to deliver tangible business value to enterprises and their customers or clients.


Cloud computing services, such as those offered by Google Cloud Platform (GCP), are enabling technologies that make the development of machine learning and AI systems simple, robust, and scalable.

In today’s ever-changing and competitive business environment, companies need to prioritize the digitization of their assets and the deployment of machine intelligence systems. The companies that can quickly and efficiently experiment with new machine learning technologies and deploy scalable solutions on robust frameworks will be the market leaders in their respective segments.

The purpose of this article and the associated work is to demonstrate the ease with which a new machine learning model can be productionized for iterative experimentation and scalable deployment.

Objectives:
– Build a neural network model from a published article
– Experiment with the model on a use case to validate functionality
– Containerize the model and register it to Google Cloud for future use

N-BEATS Model

The model used here is the N-BEATS algorithm developed by Oreshkin et al. and presented in their 2019 paper, N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. It is a deep learning model for univariate time series point forecasting.

The model was recreated according to the article, and the source code is available in my GitHub repository: gcp-nbeats-component.
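
To make the architecture concrete, here is a minimal sketch of a single generic N-BEATS block in TensorFlow/Keras; the class name, layer sizes, and default hyperparameters are illustrative choices, not the exact code from the repository:

```python
import tensorflow as tf
from tensorflow.keras import layers

class NBeatsBlock(layers.Layer):
    """A single generic N-BEATS block: a stack of fully connected layers
    that emits expansion coefficients, split into a backcast (a
    reconstruction of the input window) and a forecast."""

    def __init__(self, input_size=7, horizon=1, n_neurons=128, n_layers=4, **kwargs):
        super().__init__(**kwargs)
        self.input_size = input_size
        self.horizon = horizon
        self.hidden = [layers.Dense(n_neurons, activation="relu")
                       for _ in range(n_layers)]
        # Linear layer producing backcast + forecast expansion coefficients.
        self.theta = layers.Dense(input_size + horizon, activation="linear")

    def call(self, inputs):
        x = inputs
        for layer in self.hidden:
            x = layer(x)
        theta = self.theta(x)
        # First `input_size` coefficients form the backcast, the rest the forecast.
        return theta[:, :self.input_size], theta[:, -self.horizon:]
```

In the full architecture, these blocks are chained with doubly residual connections: each block receives the previous block’s residual (its input minus the backcast), and the individual block forecasts are summed to form the final prediction.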

The Experiment

To demonstrate the capability of the model, I thought it would be fun to run an experiment where we try to forecast the price of Tesla (TSLA) stock one day ahead using a window of the previous 7 days of historical closing prices. Disclaimer: this is for experimental and demonstrative purposes only; the model should not be used to make financial or investment decisions.

The data consists of roughly 10 years of historical TSLA daily closing prices, from June 29, 2010 to February 3, 2020. The data is split into an 80/20 train/test set: the model is trained on the first 80% of the data and validated on the remaining 20%.

The data is then formatted into windows of 7 days, each used to predict the target value of the next day’s closing price. The model is developed using the TensorFlow framework; therefore, the training and testing data are processed with the tf.data API to capitalize on its performance efficiencies when training the model.
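
As an illustration of this windowing step, here is a minimal tf.data sketch; it assumes the closing prices live in a local CSV with a Close column, and the file name, batch size, and split handling are placeholders rather than the repository’s actual preprocessing code:

```python
import pandas as pd
import tensorflow as tf

WINDOW_SIZE = 7   # previous 7 daily closing prices
HORIZON = 1       # predict the next day
BATCH_SIZE = 128  # illustrative choice

# Hypothetical CSV of daily TSLA prices with "Date" and "Close" columns.
prices = (pd.read_csv("TSLA.csv", parse_dates=["Date"])["Close"]
          .to_numpy()
          .astype("float32"))

# Chronological 80/20 split: train on the first 80%, evaluate on the rest.
split = int(0.8 * len(prices))
train_series, test_series = prices[:split], prices[split:]

def make_windows(series):
    # Slide a window of 8 values over the series and split each window
    # into a 7-day input and a 1-day target.
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(WINDOW_SIZE + HORIZON, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(WINDOW_SIZE + HORIZON))
    ds = ds.map(lambda w: (w[:WINDOW_SIZE], w[WINDOW_SIZE:]))
    return ds.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)

train_ds, test_ds = make_windows(train_series), make_windows(test_series)
```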

The result of the modeling experiment is a model capable of predicting the TSLA closing price with a mean absolute error of $7.69 on the test set.

Model Results of N-BEATS Algorithm on TSLA Stock
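
For reference, here is a hedged sketch of how a score like this could be produced, assuming a Keras model object named model assembled from N-BEATS blocks and the train_ds/test_ds datasets from the earlier sketch (the optimizer and epoch count are illustrative):

```python
import tensorflow as tf

# Assumes `model` is a Keras model assembled from N-BEATS blocks, and that
# `train_ds` / `test_ds` are the windowed datasets from the sketch above.
model.compile(loss="mae", optimizer=tf.keras.optimizers.Adam(), metrics=["mae"])
model.fit(train_ds, validation_data=test_ds, epochs=500, verbose=0)

# Mean absolute error on the held-out 20% of closing prices, in dollars.
loss, mae = model.evaluate(test_ds, verbose=0)
print(f"Test MAE: ${mae:.2f}")
```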

Deploying the Machine Learning Model

Machine learning model experimentation, such as the work on TSLA data in the previous section, is an important part of the machine learning development process, but we now need to quickly deploy this model and its underlying code in a scalable system where we can actually use the results in our business processes.

To demonstrate how this is done, I will use the Google Cloud Platform tools Cloud Build and Artifact Registry. Once registered, the model can be used as-is in any ML pipeline. This is achieved using Kubeflow for orchestration and Vertex AI for deployment.

Cloud Build brings reusability and automation to your ML experimentation by enabling you to reliably build, test, and deploy your ML model code as part of a CI/CD workflow. Artifact Registry provides a centralized repository for you to store, manage, and secure your ML container images. This will allow you to securely share your ML work with others and reproduce experiment results.
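
To show roughly how the registered image might be consumed downstream, the sketch below defines a Kubeflow Pipelines (KFP v2) container component that points at a hypothetical Artifact Registry path and submits the compiled pipeline to Vertex AI. The project ID, region, repository, image tag, bucket, and training entrypoint are all placeholders, not values from the repository, and the image is assumed to have already been built and pushed (for example, with Cloud Build):

```python
from kfp import compiler, dsl
from google.cloud import aiplatform

# Hypothetical Artifact Registry path for the containerized N-BEATS trainer.
IMAGE = "us-central1-docker.pkg.dev/my-project/ml-images/nbeats-trainer:latest"

@dsl.container_component
def train_nbeats(data_path: str):
    # Run the container's (assumed) training entrypoint against data in GCS.
    return dsl.ContainerSpec(
        image=IMAGE,
        command=["python", "train.py"],
        args=["--data-path", data_path],
    )

@dsl.pipeline(name="nbeats-forecasting")
def pipeline(data_path: str = "gs://my-bucket/tsla.csv"):
    train_nbeats(data_path=data_path)

# Compile the pipeline and submit it to Vertex AI Pipelines.
compiler.Compiler().compile(pipeline, "nbeats_pipeline.json")
aiplatform.init(project="my-project", location="us-central1")
aiplatform.PipelineJob(
    display_name="nbeats-forecasting",
    template_path="nbeats_pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
).run()
```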

All of the deployment code as well as a walkthrough of how the model can be containerized and deployed to Artifact Registry is available in my GitHub repository: https://github.com/milank94/gcp-nbeats-component.

Once containerized and registered to Google Cloud, the N-BEATS model can be used by anyone in the organization with access to the Artifact Registry repository in their own ML pipeline. They can use the code to build a model for any univariate time series problem; it does not have to be the TSLA stock price predictor. For example, the deep learning model could be used to forecast sales, predict customer demand, or predict future housing prices.

Final Remarks

Experimenting with new machine learning models or running proof-of-concept experiments for specific use cases is an important first step in the AI adoption journey. However, another important step is deploying the ML models and pipelines into a robust and scalable system.

MLOps frameworks and cloud computing platforms such as GCP are powerful tools for productionizing ML models and pipelines.
