How to deploy an App on an Amazon Web Services (AWS) EC2 instance (Part-2)


In the past, when data science was still evolving, DevOps engineers were responsible for taking Machine Learning (ML) or Deep Learning (DL) models to production, while Data Scientists were limited to doing research and creating models in Jupyter notebooks. Now, however, the trend is changing: companies are looking for full-stack ML Engineers or Data Scientists. It is therefore the need of the hour to learn how to deploy models to production as a service, or in one word, MLOps. Engineers with these skills have a bright future. These days, one should know how to take models to the cloud (AWS/GCP/Azure).

In my last blog, PART-1, I explained how to deploy a model on an AWS EC2 instance, exploring every nook and cranny of the deployment.

Limitations of the last blog :-

1. Limited to deploying only one App on an instance.

2. The App was not deployed as a service, so whenever we rebooted the instance we had to restart the App manually.

3. On every restart, the IP address of the instance changed.

4. No SSL certificate setup.

In this blog, we will go from the basics to advanced topics. If you understand this, then congrats, you know how to take your model to the real world in a professional way.

In this blog, I will focus on the following points :-

1. Deploy an ML model (Spam Detection) and a DL model (an image classifier using the ResNet50 model).

2. Deploy the ML model as a service.

3. Use an Elastic IP to stop the instance's IP address from changing on every restart.

4. Generate an SSL certificate.

1. Deploying the Models :-

While creating the EC2 instance (details in Part-1), I opened the following ports :- 22, 80, 443, 8000, 8080, 8081, 8082.

ML Model (Spam Detection) and DL model (Image Classifier) –

  • First, SSH into the instance from your local machine.
  • Second, upload the App files to the instance (the Apps are on my GitHub, linked in the Note section below).
  • For how to do SSH and upload files, go to Part-1 (link) of this series.
  • After uploading the models and unzipping them, my SSH terminal looks like below :
SSH terminal
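For reference, the connect-and-upload steps look roughly like this (the key file name, user name, and IP are placeholders for your own key pair and instance; the zip file names are assumptions based on the two Apps in this post):

```shell
# On your local machine: connect to the instance over SSH
ssh -i my-key.pem ubuntu@XX.XX.XX.XX

# From another local terminal: upload the zipped App folders
scp -i my-key.pem SpamClassifier.zip ImageClassifier.zip ubuntu@XX.XX.XX.XX:~/

# Back on the instance: unzip both Apps
unzip SpamClassifier.zip
unzip ImageClassifier.zip
```

Part-1 walks through these steps in full detail.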

Update the ports of the ImageClassifier and SpamClassifier Apps as below (typically by changing the port argument of each App's app.run(...) call) :-

Use port 8081 for the SpamClassifier App and port 8082 for the ImageClassifier App.

Both models have now been deployed successfully as Flask Apps. However, we won't use Flask's built-in server in production because it is not scalable and can't handle even 100 parallel requests (for details see here).

2. Deploying the ML model as a service :-

Now, install nginx and gunicorn, which we will use to serve the Apps, with the commands below.
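The install commands were shown in a screenshot that has not survived; on an Ubuntu instance they would typically be:

```shell
# Install nginx system-wide and gunicorn into the Python environment
sudo apt-get update
sudo apt-get install -y nginx
pip3 install gunicorn flask
```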

We want to run each model as a service so that it always runs in the background. For this, use the following commands to create a socket file and a service file for each of the two Apps.

  1. SpamClassifier App:-

2. ImageClassifier App :-

Now run the following commands; they will create the socket files for the SpamClassifier and ImageClassifier services.
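The unit files themselves were shown as screenshots; as an illustration, here is roughly what a gunicorn socket/service pair for the SpamClassifier App could look like. The paths, the user name, and the app module (app:app) are assumptions — adjust them to your own layout. The script writes the pair to a scratch directory so you can inspect it; on the instance the files would live in /etc/systemd/system/.

```shell
# Sketch of a systemd socket + service pair for the SpamClassifier App.
# Paths, user, and the app module name (app:app) are assumptions.
UNIT_DIR=$(mktemp -d)   # on the instance: /etc/systemd/system

cat > "$UNIT_DIR/spamclassifier.socket" <<'EOF'
[Unit]
Description=gunicorn socket for SpamClassifier

[Socket]
ListenStream=/run/spamclassifier.sock

[Install]
WantedBy=sockets.target
EOF

cat > "$UNIT_DIR/spamclassifier.service" <<'EOF'
[Unit]
Description=gunicorn daemon for SpamClassifier
Requires=spamclassifier.socket
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/SpamClassifier
ExecStart=/usr/bin/gunicorn --workers 3 --bind unix:/run/spamclassifier.sock app:app

[Install]
WantedBy=multi-user.target
EOF

# Then, on the instance (and similarly for the ImageClassifier App):
#   sudo systemctl start spamclassifier.socket
#   sudo systemctl enable spamclassifier.socket
echo "wrote units to $UNIT_DIR"
```

Starting the .socket unit is what creates the .sock file; systemd then launches the service on the first request.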

Now we will open the nginx configuration directory using the code below. There you will find the default file, which is nginx's default site configuration. Create another file there named flask_app.
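The contents of that flask_app file were shown as a screenshot; a plausible version, assuming gunicorn is listening on unix sockets at /run/spamclassifier.sock and /run/imageclassifier.sock (adjust to your own setup), is sketched below. The script writes it to a scratch file here; on the instance it would go in /etc/nginx/sites-available/flask_app.

```shell
# Sketch of the flask_app nginx site file (written to a scratch file here).
# Socket paths and ports follow this post's layout; adjust for your own.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
server {
    listen 8081;
    server_name XX.XX.XX.XX;   # your instance's public IP

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/spamclassifier.sock;
    }
}

server {
    listen 8082;
    server_name XX.XX.XX.XX;

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/imageclassifier.sock;
    }
}
EOF

# On the instance: enable the site, then restart nginx
#   sudo ln -s /etc/nginx/sites-available/flask_app /etc/nginx/sites-enabled/
#   sudo service nginx restart
echo "wrote sample config to $CONF"
```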

Now just restart the service using: sudo service nginx restart

So, both our ML and DL Apps are now working on the public IP. You can test them using :

XX.XX.XX.XX:8081 => SpamClassifier App

XX.XX.XX.XX:8082 => ImageClassifier App

3. Using an Elastic IP to stop the instance's IP address from changing on every restart :-

As of now, our Apps are running on the instance's public IP. But suppose that, due to a failure or some unforeseen circumstance, we have to stop and start the instance again. On restart, AWS will allot a new random public IP to our EC2 instance, which can definitely break our whole application. To avoid this, we will allocate an Elastic (static) IP to our instance, which will remain constant.

Take following steps to allocate Elastic IP :-

  1. Select the instance you want to work with, then click on Actions > Networking > Manage IP Addresses

2. Click on Allocate an Elastic IP

3. Click on Allocate

4. Now we have to associate the newly created Elastic IP with our instance : Actions > Associate Elastic IP address

5. Select our instance and then click Associate

6. Now, go back to the instance view; we can see the Elastic IP has been allocated to our instance, and it is fixed.
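The steps above use the AWS console; for reference, the equivalent AWS CLI calls look roughly like this (requires a configured AWS CLI; the instance ID and allocation ID are placeholders):

```shell
# Allocate a new Elastic IP in the VPC; note the AllocationId in the output
aws ec2 allocate-address --domain vpc

# Associate it with the instance, using the AllocationId returned above
aws ec2 associate-address --instance-id i-0abc... --allocation-id eipalloc-0abc...
```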

Update the nginx config file by replacing the old public IP with the Elastic IP, in the following manner :

Now, we have a static IP which will not change even after restarting our instance.

4. Generating an SSL certificate :-

If you buy a domain from a provider like GoDaddy, you can get an SSL certificate from the provider itself. An SSL certificate lets the site run on https://…, meaning traffic between the browser and the server is encrypted.

Here, we will generate a self-signed SSL certificate ourselves in the following way.
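The command itself was lost with the screenshot; a typical way to generate a self-signed certificate for nginx is the openssl invocation below. The output paths are the conventional ones on Ubuntu, and -days 365 and the key size are choices, not requirements. (This sketch writes to a scratch directory and passes -subj so it runs non-interactively; omit -subj to be prompted for the details instead.)

```shell
# Generate a self-signed certificate and private key in one step.
# On the instance the usual targets are /etc/ssl/private/nginx-selfsigned.key
# and /etc/ssl/certs/nginx-selfsigned.crt; a scratch directory is used here.
OUT=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout "$OUT/nginx-selfsigned.key" \
    -out "$OUT/nginx-selfsigned.crt" \
    -subj "/C=IN/ST=State/L=City/O=Org/CN=XX.XX.XX.XX"
```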

Type the certificate-generation command into the SSH terminal. It will then ask you to fill in the details below :-

Once it's done, we have to update the nginx config file in the following way (remember to check for spelling errors and put a semicolon at the end of each statement). We use port 443 as HTTPS runs on that port.
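The updated config was shown as a screenshot; since the post tests the Apps at https://XX.XX.XX.XX:8081 and :8082, a plausible version enables ssl on those per-App ports, as sketched below. The certificate paths assume a self-signed pair in the conventional Ubuntu locations, and the socket paths are assumptions — adjust both to your setup.

```shell
# Sketch of the HTTPS version of the flask_app site file
# (written to a scratch file here; paths are assumptions).
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
server {
    listen 8081 ssl;
    server_name XX.XX.XX.XX;

    ssl_certificate     /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/spamclassifier.sock;
    }
}

server {
    listen 8082 ssl;
    server_name XX.XX.XX.XX;

    ssl_certificate     /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/imageclassifier.sock;
    }
}
EOF
echo "wrote sample HTTPS config to $CONF"
```

After editing the real file, restart nginx again with sudo service nginx restart.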

Now just add https:// in front of your public IP : https://XX.XX.XX.XX:8081

It will give a potential-threat warning, because the certificate generated here is self-signed and no certificate authority has checked whether it is secure.

Click on Proceed to go to the website.

My deployed apps look like below :-

  1. Spam Classifier App (https://XX.XX.XX.XX:8081)

2. Image Classifier App (https://XX.XX.XX.XX:8082)

Hope this series has helped you deploy any type of data science model on an EC2 instance and show your creativity to the world. I believe in learning and sharing with the community, and in taking every reader one step ahead on their MLOps journey.

In the upcoming days I will share more details about deployment, which is an integral part of MLOps. In case of any questions, feel free to comment.

Note :-

  1. Link to my PART-1
  2. Link to my Github, where I have uploaded Apps used in my blog.

