Deploying PyTorch Models on NatML




NatML is officially open-source software! We had a few options for publishing the machine learning runtime (MLRT), but in keeping with our vision to democratize ML for interactive media, we decided to open-source the MLRT and shift our focus to NatML Hub.

Hub provides services that boost the value of the runtime, and one of the many ways it does this is by simplifying the process of writing predictors for your own ML models. In this article, we will walk through exactly that: deploying the MobileNetV3 classifier architecture from TorchVision into a Unity app.

What are Predictors Anyway?

Developers coming from a machine learning background are familiar with something others might not realize: machine learning models rarely produce readily usable outputs. Instead, they require additional algorithms to transform raw output data into more usable forms. As a result, every model you might encounter in the wild typically ships with accompanying Python code or Jupyter notebooks implementing feature pre- and post-processing.

This process is a pain, so NatML is designed to abstract it away completely by introducing a new concept: predictors. Predictors are lightweight primitives that make predictions with one or more ML models. They perform all of the required pre- and post-processing, so that developers only ever have to deal with familiar output types:
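For example, a classifier predictor can hand back a plain C# tuple instead of a raw tensor. A hypothetical call site (the predictor and image feature below are placeholders, not specific NatML API) might look like this:

```csharp
// The predictor returns a familiar C# tuple, not a raw tensor
var (label, confidence) = predictor.Predict(imageFeature);
Debug.Log($"Detected {label} with confidence {confidence:0.##}");
```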

Every model that runs on NatML, no matter what it does, is standardized to be used through a predictor. This way, developers can get going with the same three lines of code every time:
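In NatML for Unity, that pattern looks roughly like the sketch below. The Hub tag is a placeholder, and MobileNetv3Predictor is the predictor we will build over the rest of this article:

```csharp
// Fetch the model data from NatML Hub
var modelData = await MLModelData.FromHub("@author/mobilenet-v3");
// Deserialize the model data into a runtime model
var model = modelData.Deserialize();
// Create the predictor with the model
var predictor = new MobileNetv3Predictor(model);
```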

As you might have noticed, we are effectively shifting the responsibility of writing these predictors to the authors of the ML models. Most users can simply download ready-made predictors from Hub, but if you have a custom model, you will need to write a predictor for it. And while we already have guidelines on writing predictors, NatML Hub further simplifies the process of doing so. Let’s walk through an example.

Exporting the Model from PyTorch

NatML uses the ONNX format for all computation graphs because it is increasingly popular for ML deployment. PyTorch has a simple API for exporting an nn.Module to this format. Let’s use it in Google Colab:
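Here is a minimal sketch of the export. The small variant of the architecture, the output file name, and the input/output names are our choices here; the 224×224 dummy input matches the resolution the classifier expects:

```python
import torch
import torchvision

# Load a pretrained MobileNetV3 classifier from TorchVision
model = torchvision.models.mobilenet_v3_small(pretrained=True)
model.eval()

# Export to ONNX by tracing the model with a dummy input
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "mobilenet_v3.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=11,
)
```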

Once we’ve exported the model, we can then create a NatML predictor for it. To do so, we’ll switch over to NatML Hub.

Creating the Predictor

Simply log in to Hub, click your username in the user menu, and click “Create”. First, let’s fill out the model info and upload the model we just exported.

Now, let’s head over to the “Features” tab to add useful information about our model. Here, we’ll add the image normalization coefficients and upload the classification labels:

Hub provides this information to our code at runtime, so we don’t have to hard-code it! Next, let’s implement the predictor in C#. Thanks to Hub, we don’t have to start from scratch: the “API” tab allows us to specify what the predictor code will look like, then generates a template predictor package for us. The template includes the predictor, a sample scene, a sample script, and stubs for the readme and license.
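For TorchVision’s ImageNet classifiers, the normalization coefficients entered in the “Features” tab are the standard ImageNet mean (0.485, 0.456, 0.406) and standard deviation (0.229, 0.224, 0.225). And to give a sense of the post-processing the predictor encapsulates, here is a simplified, NatML-agnostic sketch of decoding a classifier’s raw logits into a (label, confidence) tuple; this illustrates the logic, not the exact code Hub generates:

```csharp
using System;

static class ClassifierDecoder {

    // Convert raw model logits into a familiar (label, confidence) tuple
    public static (string label, float confidence) Decode (float[] logits, string[] labels) {
        // Softmax over the logits, subtracting the max for numerical stability
        var maxLogit = float.MinValue;
        foreach (var logit in logits)
            maxLogit = Math.Max(maxLogit, logit);
        var expSum = 0f;
        var exps = new float[logits.Length];
        for (var i = 0; i < logits.Length; i++) {
            exps[i] = (float)Math.Exp(logits[i] - maxLogit);
            expSum += exps[i];
        }
        // Argmax picks the predicted class
        var argMax = 0;
        for (var i = 1; i < exps.Length; i++)
            if (exps[i] > exps[argMax])
                argMax = i;
        return (labels[argMax], exps[argMax] / expSum);
    }
}
```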
