E2E Deep Learning: Serverless Image Classification




Build an end-to-end deep learning model to classify real-world images using TensorFlow, Docker, and AWS Lambda with API Gateway

Photo by Benjamin Rascoe on Unsplash

A. Introduction

A.1. Background & Motivation

In the data science life cycle, deployment is the final stage of a project: the point where we finally put our AI model into practice. We deploy the model after evaluation and input from stakeholders to ensure that the solution actually helps solve the business problem. Deployment is therefore a necessary skill, because only by doing it can we gather feedback from users, refine the model, and assess its performance and impact. In other words, the ability to manage an end-to-end data science project is a must for any data scientist out there.

A.2. Objectives

Now, imagine we need to deal with lots of real-world clothing images, and our responsibility is to create an automated image classifier for an e-commerce company. The challenge is not only to build a robust deep learning model, but also to deploy it as a serverless app, since we want to focus on the business solution rather than the heavy-lifting infrastructure that hosts it. Luckily, the combination of AWS Lambda and API Gateway can be used to host serverless APIs.

In this project, we will learn together how to:

  • build a deep learning model to classify images using TensorFlow.
  • convert the model into a more size-efficient format using TensorFlow Lite.
  • deploy the model locally on our machine using Docker.
  • deploy the model as a REST API using AWS Lambda and API Gateway.

A.3. Table of Contents

  • Introduction > Objectives > Table of Contents
  • Model Training > The Image Dataset > Build the Model > Train the Model > Evaluate the Model
  • Model Conversion > Convert the Model > Use the Converted Model
  • Model Deployment > Lambda Function > Deploy Locally with Docker > Deploy on AWS

Since this tutorial article will be quite extensive, feel free to jump into some sections that suit your needs.

Prerequisites.
To follow along with this project, we expect you to have a basic understanding of how to build a deep learning model with TensorFlow, to know what Docker is and how it works, to be familiar with AWS terminology, and to have an AWS account to access its services.

B. Model Training

B.1. The Image Dataset

The dataset contains 3781 clothing images spanning the 10 most popular categories, divided into train, test, and validation sets. Table 1 shows a summary of the dataset for better understanding. We can access the data for free here [2].

Table 1. Summary of the clothing image dataset used for training the deep learning model.

To display the images using Python, we can use the matplotlib.pyplot.gcf() function to get the current figure and configure it with a specific number of rows and columns. Then, in each row and column, we can place an image as a subplot.
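As a minimal sketch, assuming a directory layout of train/<class_name>/<image files> (the paths and figure size are illustrative):

import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

train_dir = 'clothing-dataset-small/train'         # assumed dataset path
class_names = sorted(os.listdir(train_dir))[:10]   # the 10 clothing categories

nrows, ncols = 2, 5
fig = plt.gcf()                                    # get the current figure
fig.set_size_inches(ncols * 3, nrows * 3)

for i, class_name in enumerate(class_names):
    class_dir = os.path.join(train_dir, class_name)
    img_path = os.path.join(class_dir, os.listdir(class_dir)[0])
    sp = plt.subplot(nrows, ncols, i + 1)          # one subplot per class
    sp.axis('off')
    sp.set_title(class_name)
    plt.imshow(mpimg.imread(img_path))

plt.show()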

Samples from the image dataset for each of the 10 classes.

B.2. Build the Model

We will build a deep learning model using transfer learning and image augmentation to achieve good performance and prevent overfitting. The pre-trained model we use is InceptionV3, but feel free to experiment with others. Keras has a built-in model definition for InceptionV3. We will use (150, 150, 3) as the desired input shape, exclude the fully connected layer at the top, and use the local weights that we can download here. Import the class and instantiate it with the mentioned configuration as follows:

Defining the pre-trained InceptionV3 model.
The summary of the pre-trained model.
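For reference, a minimal sketch of that definition might look like this (the local weights file name is an assumption; it is the standard Keras "no top" ImageNet weights file):

from tensorflow.keras.applications.inception_v3 import InceptionV3

# Path to the downloaded ImageNet weights without the top layer (name is illustrative).
local_weights_file = 'inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'

pre_trained_model = InceptionV3(
    input_shape=(150, 150, 3),   # the desired input shape
    include_top=False,           # exclude the fully connected layer at the top
    weights=None                 # we load the local weights instead of downloading them
)
pre_trained_model.load_weights(local_weights_file)

pre_trained_model.summary()      # inspect the layer names (mixed0 ... mixed10)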

As we can see, each layer has its own name; the last layer is called mixed10, and its output has been convolved down to 3 by 3. What’s interesting is that we can cut the network at an earlier layer to keep a little more spatial information. For instance, mixed7 has an output of 7 by 7. Hence, it’s fine to experiment with the choice of last layer to suit our needs.

Choosing the last layer of the pre-trained model for our needs.
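Continuing the sketch above, freezing the pre-trained layers and cutting the network at mixed7 could look like this:

# Freeze the pre-trained layers so their weights are not updated during training.
for layer in pre_trained_model.layers:
    layer.trainable = False

# Use mixed7 (7x7 feature maps) instead of the final mixed10 layer as the cut-off point.
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape:', last_layer.output_shape)
last_output = last_layer.output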

We will define a new model that builds on the pre_trained_model of InceptionV3 to classify clothing images into 10 different categories. Here, we can build the last layers of the new model as follows:

Using transfer learning to re-build the pre-trained model for our objectives.
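One way the new classification head might look, continuing from last_output above (the hidden layer size, dropout rate, and learning rate are illustrative; the 10-unit softmax matches the 10 clothing categories):

from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import RMSprop

# Flatten the mixed7 feature maps and add a small fully connected classifier on top.
x = layers.Flatten()(last_output)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)                       # extra regularization against overfitting
x = layers.Dense(10, activation='softmax')(x)    # 10 clothing categories

model = Model(pre_trained_model.input, x)
model.compile(
    optimizer=RMSprop(learning_rate=0.0001),
    loss='categorical_crossentropy',
    metrics=['accuracy']
)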

B.3. Train the Model

Now, we are ready to train the model. Notice that we normalize the image pixel values by dividing them by 255, set several parameters in the ImageDataGenerator to augment the training images and prevent overfitting, set the batch_size to 32, and set the target image size to (150, 150) to fit the model’s input shape.

Training the model using image augmentation for 100 epochs
Visualizing the model performance during training.
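A minimal sketch of the data generators and the training call, continuing from the model above (the dataset paths and augmentation values are assumptions):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Normalize pixel values to [0, 1]; augment only the training images.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)
validation_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory(
    'clothing-dataset-small/train',        # assumed path
    target_size=(150, 150),                # fit the model input shape
    batch_size=32,
    class_mode='categorical'
)
validation_generator = validation_datagen.flow_from_directory(
    'clothing-dataset-small/validation',   # assumed path
    target_size=(150, 150),
    batch_size=32,
    class_mode='categorical'
)

history = model.fit(
    train_generator,
    validation_data=validation_generator,
    epochs=100,
    verbose=1
)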

Congrats! We just built a robust deep learning model using transfer learning and image augmentation. We achieved 90.59% test accuracy with a test loss of 0.273 and managed to avoid overfitting. In fact, our test accuracy is ~5% higher than the training accuracy, which is great!

B.4. Evaluate the Model

Let’s validate the model by making predictions on new, unseen images. As expected, the model works really well, making a correct prediction for each test image.
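As a sketch, scoring a single unseen image could look like this, continuing from the training sketch above (the file name is an assumption):

import numpy as np
from tensorflow.keras.preprocessing import image

# Load and preprocess one test image the same way the training data was handled.
img = image.load_img('test_images/t-shirt.jpg', target_size=(150, 150))
x = image.img_to_array(img) / 255.0          # same normalization as the generators
x = np.expand_dims(x, axis=0)                # shape (1, 150, 150, 3)

preds = model.predict(x)
class_names = list(train_generator.class_indices.keys())
print(class_names[np.argmax(preds[0])])      # predicted clothing category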

C. Model Conversion

After we build the model using TensorFlow, we will soon notice that the file size is too large and not optimized for deployment, especially on mobile or edge devices. This is where TensorFlow Lite (TFLite) comes into play. TFLite helps us convert the model into a more efficient .tflite format. This produces a small, lightweight, low-latency binary with only a minor impact on accuracy.

C.1. Convert the Model

Here are the steps we need to follow to convert our best trained model into a .tflite file (a minimal conversion sketch follows the list):

  • load the trained model from its .h5 file,
  • instantiate a TFLiteConverter object from the loaded model,
  • convert and save the converted model in the .tflite file format.
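Under those steps, a minimal conversion script might look like this (the model file names are illustrative):

import tensorflow as tf
from tensorflow import keras

# Load the best trained Keras model from its .h5 file (name is illustrative).
model = keras.models.load_model('clothes_model.h5')

# Convert it with the TFLite converter and write the result to disk.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('clothes_model.tflite', 'wb') as f:
    f.write(tflite_model)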

C.2. Use the Converted Model

Once we have converted the model to the .tflite format, we can load it with the TFLite Interpreter to see how the model performs when making a prediction, before deploying it on a mobile or edge device.
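A minimal sketch of such a check, using the interpreter bundled with TensorFlow and a random array standing in for a preprocessed image (on Lambda we will use the lighter tflite_runtime package instead):

import numpy as np
import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path='clothes_model.tflite')
interpreter.allocate_tensors()

input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']

# A preprocessed image batch of shape (1, 150, 150, 3), dtype float32.
X = np.random.rand(1, 150, 150, 3).astype(np.float32)   # placeholder input
interpreter.set_tensor(input_index, X)
interpreter.invoke()

preds = interpreter.get_tensor(output_index)
print(preds)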

D. Model Deployment

In this final step, we will deploy the model using Docker, AWS Lambda, and AWS API Gateway. First, we need to create a lambda_function.py, since both options (deploying on AWS Lambda or locally with Docker) need this file for the deep learning model to run.

D.1. Lambda Function

The lambda_function.py file stores all the functions needed to run the app: defining the interpreter, receiving the input image, preprocessing it, and using the saved model to make the prediction.
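A hedged sketch of what lambda_function.py could contain; the model file name, the class labels, and the input format (an image URL under a 'url' key in the event) are assumptions for illustration:

# lambda_function.py -- a minimal sketch; names and input format are illustrative.
from io import BytesIO
from urllib import request as urlreq

import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite

# Assumed label order matching the training generator's class indices.
CLASSES = ['dress', 'hat', 'longsleeve', 'outwear', 'pants',
           'shirt', 'shoes', 'shorts', 'skirt', 't-shirt']

# Define the interpreter once, when the Lambda container starts.
interpreter = tflite.Interpreter(model_path='clothes_model.tflite')
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output_index = interpreter.get_output_details()[0]['index']


def download_image(url):
    # Receive the input image from a URL.
    with urlreq.urlopen(url) as resp:
        return Image.open(BytesIO(resp.read()))


def preprocess(img):
    # Preprocess the image exactly as during training.
    img = img.convert('RGB').resize((150, 150))
    x = np.array(img, dtype=np.float32) / 255.0
    return np.expand_dims(x, axis=0)


def predict(url):
    # Use the saved model to make the prediction.
    X = preprocess(download_image(url))
    interpreter.set_tensor(input_index, X)
    interpreter.invoke()
    preds = interpreter.get_tensor(output_index)[0]
    return dict(zip(CLASSES, preds.astype(float).tolist()))


def lambda_handler(event, context):
    # The event is assumed to carry the image URL.
    return predict(event['url'])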

D.2. Deploy Locally with Docker

We just created the lambda_function.py. Next, we want to deploy it using AWS Lambda. For that, we will use Docker: AWS Lambda supports container images, so we can package and deploy our function as one.

In this section, you will learn how to run the model locally using Docker within your machine.

D.2.1. Dockerfile

The next step is to create a Dockerfile. A Dockerfile is a way to put all the dependencies you need for running the code into one single image that contains everything. A Docker image is a private file system just for your container; it provides all the files and code your container needs. In our case, the Dockerfile takes care of the following (a sketch of such a Dockerfile appears after the list):

  • installing the Python package management system (pip).
  • installing the Pillow library to deal with image files.
  • installing the TensorFlow Lite tflite_runtime interpreter.
  • copying our model in .tflite format into the Docker image.
  • copying the lambda_function.py into the Docker image.
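Under those assumptions, such a Dockerfile could look roughly like this; the AWS Lambda Python base image and the file names are assumptions, and the tflite-runtime wheel must be compatible with the base image's Python version:

# Start from the AWS-provided base image for Python Lambda functions (illustrative tag).
FROM public.ecr.aws/lambda/python:3.8

# Update the Python package manager and install the libraries needed at inference time.
RUN pip3 install --upgrade pip
RUN pip3 install pillow tflite-runtime

# Copy the converted model and the handler into the image (keep the default working directory).
COPY clothes_model.tflite .
COPY lambda_function.py .

# Tell Lambda which handler to invoke: <module>.<function>.
CMD ["lambda_function.lambda_handler"]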

What we need to do now is to build this Docker image and run it locally.

D.2.2. Build the Docker Image

The following are the steps to run the application locally:

Run the Docker daemon. There are two ways to do this:

  • The first option is to open cmd as administrator and launch the following command: "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon
  • The second option is to run Docker Desktop from the Start menu and verify that Docker is in a running state.

Build an image from the Dockerfile. One important note: do not change the working directory set in the Dockerfile.

$ docker build -t tf-lite-lambda .
  • The command above builds the image from the contents of the folder you are currently in and tags it tf-lite-lambda.

D.2.3. Run the Container Image

Start a container based on the image you built in the previous step. Running a container launches your application with private resources, securely isolated from the rest of your machine. Once the container is up, we can send it a test request, as sketched after the list below.

$ docker run --rm -p 8080:8080 --name clothes-classifier tf-lite-lambda
  • The -p flag (short for publish) maps container port 8080 to port 8080 on the host machine. The container opens a web server on port 8080, and we can map ports on our computer to ports exposed by the container.
  • The --rm flag (short for remove) automatically removes the container when it exits.
  • The --name flag gives the new container a name (clothes-classifier), and tf-lite-lambda is the image we use to create the container.
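Once the container is running, the Lambda runtime interface emulator inside the base image listens for invocations on a local endpoint, so we can send it a test event. A hedged sketch; the payload key and image URL are assumptions matching the sketched lambda_function.py:

import requests

# The Lambda runtime interface emulator in the base image exposes this invocation path.
url = 'http://localhost:8080/2015-03-31/functions/function/invocations'

event = {'url': 'https://example.com/some-clothing-image.jpg'}   # illustrative input
response = requests.post(url, json=event)
print(response.json())   # per-class scores returned by lambda_handler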

Here are the screenshots of the results from the previous commands:
