Sharing your Machine Learning or Deep Learning Projects with Users with Gradio




Create a user-friendly web-based UI to let users interact with your trained models

All images by author

As an AI specialist, you spend a lot of time building and training your machine learning or deep learning models. Once a model is trained to your satisfaction, the next logical step is to let your users try it out. Typically, this means building a dedicated UI (commonly a web app) or exposing your models through a REST API. Either way, you have to spend time building these user interfaces yourself.

Wouldn’t it be nice if there were a package that could automatically expose your model so that users can try it out quickly?

That is what this article will cover. I will discuss how you can use the Gradio package to bind your model(s) to a web-based UI so that users can easily interact with them.

There are quite a number of customizations that you can perform with Gradio, but this article will get you started with it quickly.

Installing Gradio

Gradio is a Python package that creates a web-based UI for interacting with your machine learning / deep learning models.

To install Gradio, use the pip command:

pip install gradio

Basic UI

Let’s create a very basic UI using Gradio to understand how it works. For this, I will use Jupyter Notebook.

First, define a function with a single parameter that returns a single value:

def do_something(name):
    return f'Hello, {name}'
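
Calling the function directly in a notebook cell shows the kind of value Gradio will later display:

do_something('Gradio')   # returns 'Hello, Gradio'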

Instead of calling the function using code, we want to bind it to a web UI so that the user can pass in the input and get the output, all through the web interface. This is where Gradio comes in.

The following code snippet binds Gradio to the do_something() function:

import gradio as gra

app = gra.Interface(fn = do_something,             # the function to bind to
                    title = 'Say hello to gradio',  # title of the interface
                    inputs = 'text',                # type of input(s)
                    outputs = 'text')               # type of output(s)
app.launch()

“text” represents the string type.
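
Strings like “text” are shorthand for Gradio components. If you want more control over the input field (a label, a placeholder, and so on), you can pass a component object instead. Here is a minimal sketch, assuming a Gradio version where gra.Textbox accepts the label and placeholder parameters:

app = gra.Interface(fn = do_something,
                    title = 'Say hello to gradio',
                    # gra.Textbox is the component behind the 'text' shorthand
                    inputs = gra.Textbox(label = 'Name', placeholder = 'Enter your name'),
                    outputs = 'text')
app.launch()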

When you run the above cells, Jupyter Notebook displays the web UI, where you can enter a name, click the Submit button, and view the output on the right side of the interface.

If you do not want to view the Gradio UI within Jupyter Notebook, you can also copy the URL (http://127.0.0.1:7867 in this case) to view it in a separate web page.

Flagging

Notice that in the previous figure, there is a button labelled Flag. What is the use of this button? Well, it is a way for you to log the inputs and outputs if you find that the output returned by the function warrants further attention. When you click the Flag button, a new folder named flagged will be created in the same folder where you run your Jupyter Notebook. Within this folder, you will find a file named log.csv. The CSV file will record your inputs, outputs, and other details:

name,output,flag,username,timestamp
Lee,"Hello, Lee",,,2023-05-12 09:02:53.278676

The behavior of the Flag button is customizable.
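
For example, you can offer preset flag reasons or change where the log is written. A minimal sketch, assuming a Gradio version that supports the allow_flagging, flagging_options, and flagging_dir parameters of Interface:

app = gra.Interface(fn = do_something,
                    title = 'Say hello to gradio',
                    inputs = 'text',
                    outputs = 'text',
                    allow_flagging = 'manual',                 # only log when the user clicks Flag
                    flagging_options = ['wrong', 'offensive'], # preset reasons shown as extra buttons
                    flagging_dir = 'flagged')                  # folder where log.csv is written
app.launch()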

Sharing Publicly

Gradio allows your web UI to be shared publicly. To do that, simply set the share parameter to True in the launch() function:

app.launch(share=True)

When you rerun the cell, you will see that besides accessing the web UI locally, you can now also access it remotely:

Running on local URL:  http://127.0.0.1:7862
Running on public URL: https://ab5a7d485a8171b0b6.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades (NEW!), check out Spaces: https://huggingface.co/spaces

Now anyone with the link can access your Gradio web UI. Note, however, that the hosted page is only valid for 72 hours.
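
Since anyone who has the public link can reach your UI, you may want simple password protection while the link is live. A minimal sketch, assuming your Gradio version supports the auth parameter of launch() (the credentials below are just placeholders):

app.launch(share = True,
           auth = ('demo_user', 'demo_password'))   # prompt for a username and password first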

Handling Multiple Inputs and Outputs

Now that you have a good understanding of how Gradio works, let’s explore it further. Instead of a single input and a single output, how about multiple inputs and outputs?

Say I modify the do_something() function to accept two parameters and return a tuple result:

def do_something(name, age):
    return f'Hello, {name}', age

To bind Gradio to the modified function, you now need to set both the inputs and outputs parameters to a list, like this:

app = gra.Interface(fn = do_something,
                    title = 'Say hello to gradio',
                    inputs = ['text', 'number'],
                    outputs = ['text', 'number'])
app.launch()

“number” represents the numeric type.

Gradio will now display the following UI. Type some values into name and age and click Submit:

Tabbed Interface

If you have several functions that you want to bind, you can group them into a tabbed interface using the TabbedInterface class. Here is an example that is pretty self-explanatory:

import gradio as gra

def do_something(name, age):
    return f'Hello, {name}', age

def do_anotherthing(country):
    return f'You are from {country}'

# Tab 1
app1 = gra.Interface(fn = do_something,
                     title = 'Interface 1',
                     inputs = ['text', 'number'],
                     outputs = ['text', 'number'])

# Tab 2
app2 = gra.Interface(fn = do_anotherthing,
                     title = 'Interface 2',
                     inputs = 'text',
                     outputs = 'text')

# grouping Tab 1 and 2
tabbed = gra.TabbedInterface([app1, app2],
                             ['Tab 1', 'Tab 2'])
tabbed.launch()

When the web UI is first loaded, you will see the first interface — Tab 1:

Clicking on Tab 2 will display the second interface — Tab 2:

Working with Images

Perhaps one very cool feature of Gradio is its ability to work with images, not just text or numerical inputs. Here’s a function that takes in an image (as a NumPy array) and returns it in grayscale:

from skimage.color import rgb2gray
import numpy as np

def display_image(img):
    # convert the image to grayscale
    return rgb2gray(img)
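
You can quickly verify the function on its own before binding it, for example with a random test image (hypothetical data):

test_img = np.random.randint(0, 255, (64, 64, 3), dtype = np.uint8)
gray = display_image(test_img)
print(gray.shape)   # (64, 64) - the three colour channels are collapsed into one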

Using Gradio, you can bind this function using a gr.Image() component as the input and “image” as the output:

import gradio as gr

demo = gr.Interface(fn = display_image,
                    inputs = gr.Image(),
                    outputs = "image")
demo.launch()

When you run the above code snippets, you will see the following:

Drag and drop an image onto the left side of the UI and click Submit. You will see the grayscale version of the image on the right!

Image source: https://en.wikipedia.org/wiki/Taxi#/media/File:Cabs.jpg

Connecting to Deep Learning Models

In my previous article on HuggingFace, I talked about how to use pre-trained models to perform object detection.

This is a good time to use that pre-trained model together with Gradio. Instead of modifying the code to load a specific image, your users can now simply drag and drop the desired image into the UI, click a button, and get the image back with the various objects detected!

First, let’s define the function to take in an image and use the pre-trained model to perform object detection:

from transformers import DetrImageProcessor, DetrForObjectDetection
from PIL import Image, ImageDraw
import requests
import torch
import numpy as np
import gradio as gr

# using the pre-trained model
image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

def detect_objects(image):
    # convert image from NumPy array to PIL format
    image = Image.fromarray(image)

    # process the image
    inputs = image_processor(images = image,
                             return_tensors = "pt")
    outputs = model(**inputs)

    # create the target size in the format of (height, width)
    target_sizes = torch.tensor([image.size[::-1]])

    # detect objects in the image
    results = image_processor.post_process_object_detection(outputs,
                                                            target_sizes = target_sizes,
                                                            threshold = 0.9)[0]
    draw = ImageDraw.Draw(image)

    for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
        box = [round(i, 2) for i in box.tolist()]
        # draw a bounding box around the object
        draw.rectangle(box,
                       outline = "yellow",
                       width = 2)
        # display the object label
        draw.text((box[0], box[1] - 10),
                  model.config.id2label[label.item()],
                  fill = "white")
    return image

The above function returns an image with all the detected objects drawn with bounding boxes, together with their labels.

Let’s bind Gradio with the detect_objects() function:

demo = gr.Interface(detect_objects,
                    gr.Image(shape = (400, 300)),  # resize the input image to this shape
                    "image")
demo.launch()

Run the above code snippets and drag-and-drop the cab image onto the left side of the UI. Once you click Submit, you will be able to see the annotated image on the right!
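
If you want users to try the model without hunting for their own picture, you can also preload clickable sample inputs. A minimal sketch, assuming an image file named cabs.jpg (a hypothetical file name) sits next to the notebook and your Gradio version supports the examples parameter:

demo = gr.Interface(detect_objects,
                    gr.Image(shape = (400, 300)),
                    "image",
                    examples = ["cabs.jpg"])   # shown as a clickable example below the input
demo.launch()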

If you like reading my articles and they have helped your career or studies, please consider signing up as a Medium member. It is $5 a month, and it gives you unlimited access to all the articles (including mine) on Medium. If you sign up using the following link, I will earn a small commission (at no additional cost to you). Your support means I will be able to devote more time to writing articles like this.

Summary

Gradio is a handy tool for quickly making your models available for testing. Instead of writing a front end (or back end) to let users access your models, Gradio binds them to a well-designed UI so that users can quickly test and evaluate them. In a future article, I will dive into more ways you can customize Gradio. Stay tuned!
