Monkeypox Detection from Images


The World Health Organization has declared monkeypox a global health emergency.

There are different kinds of pox — chickenpox, smallpox, and now monkeypox — along with look-alike rashes such as measles. It would be interesting to see how much data is available for the different types and to train models capable of detecting the pox type from images.

Deep Learning Library used: PyTorch

GitHub Repository: Link

Part 1: Data Collection

A Google search revealed a pox data repository maintained by Md Manjurul Ahsan on GitHub. It contains color images of Chickenpox, Measles, Monkeypox, and Normal skin. There is also a grayscale version of the same images, as well as a set of augmented grayscale images for each class.

Images of different pox types

We can download the data easily. However, we need to arrange the files so that the PyTorch library can use them.
We create a folder called project with a sub-folder called data, and then create four empty folders inside data to represent the different data classes.

Folders to contain the images

Now we copy the images from Chickenpox_gray and Chickenpox_gray_augmented from the repository into project/data/Chickenpox.

Copy the images from Measles_gray and Measles_gray_augmented from the repository into project/data/Measles.

Copy the images from Monkeypox_gray and Monkeypox_gray_augmented from the repository into project/data/Monkeypox.

Finally, copy the images from Normal_image_gray and Normal_image_gray_augmented from the repository into project/data/Normal.
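The copying steps above can be scripted. This is a sketch: the source folder name `pox-dataset` is illustrative (use wherever you cloned the repository), and the repository sub-folder names are taken from the description above.

```python
# Sketch: copy the grayscale + augmented images into one folder per class.
# "pox-dataset" is an illustrative name for the cloned repository.
import shutil
from pathlib import Path

MAPPING = {
    "Chickenpox": ["Chickenpox_gray", "Chickenpox_gray_augmented"],
    "Measles": ["Measles_gray", "Measles_gray_augmented"],
    "Monkeypox": ["Monkeypox_gray", "Monkeypox_gray_augmented"],
    "Normal": ["Normal_image_gray", "Normal_image_gray_augmented"],
}

def arrange(src, dst):
    """Copy each class's grayscale and augmented images into dst/<class>/."""
    src, dst = Path(src), Path(dst)
    for cls, folders in MAPPING.items():
        target = dst / cls
        target.mkdir(parents=True, exist_ok=True)
        for folder in folders:
            for img in (src / folder).glob("*"):
                shutil.copy(img, target / img.name)

# arrange("pox-dataset", "project/data")
```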

After transferring the files, the folder structure should look as follows:

Each folder has its collection of grayscale images and grayscale augmented images. Some file names have blank spaces in between

Part 2: Data Transformation

If we check the individual files, we will see that they have dissimilar sizes. However, they need to be uniform before we can feed them to the network. We will thus define some PyTorch transformations to make the images consistent.

We open a Jupyter notebook at the root level of the project.

code.ipynb will contain the model and training code

The libraries need to be imported first

Next, we process the data into the train loader and test loader.

Code snippet to load the images from a folder.
Output of above code cell. Shows the mapping between class and label value.

Note that the root folder for the images is data/, which contains all the images of the different pox types. So we add extra code in the above snippet to split the data into train and test sets. Also, in lines 23 and 24, shuffle has been set to True so that the images are re-ordered randomly before being split into train and test.

Next, we write a piece of code to check that the images are transformed correctly.

Display an image from the training set.

The above code takes the 10th image from the 1st batch of training images and displays it. As the data is shuffled in the previous block of code, your output may be different from mine.
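A sketch of that display cell (the helper name and the use of `permute` for imshow are my choices, not necessarily the original's):

```python
# Sketch: display one image from the first batch of a DataLoader
import matplotlib.pyplot as plt

def show_sample(loader, index=9):
    """Show the (index+1)-th image of the first batch, e.g. index=9 for the 10th."""
    images, labels = next(iter(loader))
    # channels-first tensor (C, H, W) -> channels-last array (H, W, C) for imshow
    plt.imshow(images[index].permute(1, 2, 0))
    plt.title(f"label = {labels[index].item()}")
    plt.show()
```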

A normal skin image

With the data in place, we are ready to start re-training the pre-trained models.

Part 3: Training different models

We define a generic train function and a generic accuracy check function that can be used to calculate the efficiency of the models.

Generic function to train model on data and test its accuracy

The first line contains a device check to determine whether the code is running on a CPU or a GPU. We set the number of epochs to 30. Then come the two functions used to train a model and to test it.

We can also check if there are major imbalances in the data with the following code.

The data is fairly balanced. Could have been better, but it seems okay.
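One way to sketch that balance check: `ImageFolder` keeps a `(path, label)` list in `dataset.samples`, which we can count per class (the function name is mine):

```python
# Sketch: count images per class to spot major imbalances
from collections import Counter

def class_counts(dataset):
    """Return {class name: number of images} for an ImageFolder dataset."""
    counts = Counter(label for _, label in dataset.samples)
    return {dataset.classes[idx]: n for idx, n in counts.items()}
```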

We are now ready to perform the model training. The pre-trained models that we shall train are

  • squeezenet1_0
  • densenet161
  • densenet169
  • alexnet
  • resnet18


The following code snippet trains the squeezenet1_0 model.

The code can be divided into 4 parts. Line 1 sets the location where the trained model would be saved so that it can be used in the future. Lines 4 to 9 initialize the model and other conditions. As the data has 4 classes, it is necessary to change the final layer of the model to have 4 outputs (line 5). Lines 12 to 14 train the model. Finally, a fresh model is initialized in lines 17 to 22, and it is tested in line 25.

Accuracy is as follows

75% accuracy, not good enough


The following code snippet trains the densenet161 model.

The output is as follows

The test accuracy for DenseNet161


Following is the code snippet for training a DenseNet169. It is very similar to the previous code, with changes to the model name and the number of neurons in the pre-final layer (lines 6 and 20).

The output is as follows:

Accuracy of around 82 %


Next, we try an AlexNet. The following code helps us load and re-train the model.

Increased accuracy to 90 %


The following code snippet trains the resnet18 model.

Training and testing a resnet18 model

Four things are happening in the above code.

In line 1, we select the name and location where the model will be saved. Lines 3 to 12 set up the resnet18 model; we change the number of outputs to 4, the number of classes in our dataset. Lines 15 to 17 train the model and save it to the path. In lines 19 to 25, we initialize another resnet18 model and load the saved weights into it. Finally, we check its accuracy in line 28.

Accuracy improves to 94.23 %

This brings us to the end of training. In order to make the models useful, we need to write code to apply the model over single images and predict the type of pox that the skin in the image might have.

As resnet18 gives the best accuracy, we will use it in the subsequent steps.

Part 4: Classifying individual images

We assume that the input image will be either a file or an HTTP link that needs to be processed and passed through the resnet model trained above to ascertain the prevalent pox type. Let us define two functions, one to load the model and another to predict the class given the model and image (which could be grayscale or color).

Create another Jupyter notebook called test.ipynb

Decoupling the test

The above code performs the necessary imports, sets up the transformers, and initializes some global variables like classes, PATH, and device.

Next, we define the function to load a model (resnet18). This will be similar to the code that we have already written before. However, we put it inside a function for ease of use.

loading the resnet18 model

Another function is required to apply the model on the passed images. The image could be a file or a url.

The above function takes an image location and uses the validators package to check whether it is a URL or an absolute path (lines 6 to 10). Lines 12 to 15 perform some transformations: storing the single image in a list (as the model expects a batch of images) and duplicating the grayscale channel into three channels, since the models expect three-channel color input. Line 16 is where the code generates the prediction values. We sort the values from highest to lowest and return the labels in that order. The output of this function is therefore a list of pox types.

For example, [Monkeypox, Chickenpox, Measles, Normal] means the skin in the image is most likely to have Monkeypox, followed by Chickenpox, and so on.

Officially, this brings us to the end of this article. The code for the entire exercise can be found on GitHub.

Bonus Part: Front End Application

It would be great if we could share this as an application with the world: we upload an image of the skin, and it tells us the possible pox types in decreasing order of likelihood. We can do precisely that using Streamlit.

Live Streamlit app performing pox type detection

For a more detailed explanation of coding in Streamlit, take a look at this article, and go to the section Hosting As a Streamlit Application (Locally and then in the cloud). The code for this application is written in a similar format. The GitHub repo for the Streamlit app can be found here. You can try the app here.


If you stayed till the end, here is wishing you and your near ones good health. Until next time.


