Detect Covid-19 from Chest X-Ray Images using pre-trained networks


Now let’s work on detecting Covid-19 from Chest X-Ray Images using pretrained networks.

We are going to use a pretrained ResNet-50 model via a transfer learning process to develop a binary classification model that can distinguish between Covid-19 and normal chest X-rays. For this step, we have 100 Covid-19 and 100 normal chest X-rays.

  1. Importing necessary packages
import os     # to implement os related functionality
import tensorflow as tf # library useful for training deep neural nets
from tensorflow.keras import layers, Model
from sklearn.model_selection import train_test_split
from tensorflow.keras.applications import ResNet50 # import resnet 50
from tensorflow.keras.models import Sequential
# importing keras layers
from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Dropout
# Preprocesses a tensor or Numpy array encoding a batch of images.
from tensorflow.keras.applications.resnet50 import preprocess_input
# Generate batches of tensor image data with real-time data augmentation.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

2. Learning_with_given_data():

2.1. We call a learning_function(), passing a base directory path as its argument. This function is responsible for all the computations.

3. Transfer Learning

3.1. The logic behind transfer learning is simple: we take a model pretrained on a larger dataset (in our case ResNet50) and transfer its knowledge to a smaller dataset.

Figure 6. The idea behind transfer learning [12]

3.2. We are going to use ResNet50 with ImageNet weights. ResNet-50 was developed to avoid degrading accuracy as networks grow deeper; its residual connections also mitigate the vanishing gradient problem [11].

4. Using the base model (ResNet50) from tensorflow.keras.applications.

base_model = ResNet50(input_shape=(224, 224, 3),
                      include_top=False,
                      pooling='max',
                      weights='imagenet')

# Freeze every pretrained layer so only the newly added head is trained
for layer in base_model.layers:
    layer.trainable = False

4.1. include_top: specifies whether to include the fully connected layer at the top of the network.

4.2. input_shape: we must specify the input shape when include_top is False.

4.3. pooling: an optional argument for feature extraction when include_top is False. pooling='max' applies global max pooling to the final convolutional output, producing a flat feature vector.

4.4. weights: "imagenet" loads the ImageNet-pretrained weights.

4.5. Here we freeze the whole pretrained model, remove its fully connected top layer, and add two new trainable layers. The reasoning: the convolutional layers extract generic, low-level features common across images (edges, patterns, gradients), while the final layers identify task-specific features; in a chest X-ray, these are signs such as loss of the normal black appearance and the emergence of a white, grass-like pattern indicating pneumonia or Covid-19 [12][13].
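The effect of pooling='max' in 4.3 can be sketched in plain Python (a toy example, not the author's code; the 7×7 grid matches ResNet50's final feature-map size for 224×224 inputs, while the channel count is shrunk from 2048 to 4 for brevity):

```python
import random

# Toy global max pooling: for each channel, keep only the single
# largest activation across the 7x7 spatial grid.
channels = 4  # 2048 in the real ResNet50 output; 4 here for brevity
feature_maps = [[[random.random() for _ in range(channels)]
                 for _ in range(7)] for _ in range(7)]

pooled = [max(feature_maps[i][j][c] for i in range(7) for j in range(7))
          for c in range(channels)]
print(len(pooled))  # one value per channel -> 4
```

This is why the frozen base model hands the new Dense layers a flat feature vector instead of a 3D feature volume.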

5. Let us create a sequential model and add our resnet base model.

# Create model
model = Sequential()

model.add(base_model)                     # frozen ResNet50 feature extractor
model.add(Dense(256, activation='relu'))  # new trainable layer
model.add(Dense(1, activation='sigmoid')) # single-unit binary output

# Compile model
model.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy'])

5.1. We have added two dense layers: the second has a single unit with a sigmoid activation, which outputs one probability for our two classes, making it well suited to binary classification (effectively a logistic regression head on top of the extracted features).
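To see why a single sigmoid unit is enough for two classes, here is a small sketch (the 0.5 decision threshold is the usual convention and is assumed, not stated in the source):

```python
import math

def sigmoid(x):
    # Squash any real-valued logit into (0, 1), read as P(class = 1)
    return 1.0 / (1.0 + math.exp(-x))

logit = 2.0                    # hypothetical raw output of the Dense(1) unit
probability = sigmoid(logit)   # ~0.88
label = 1 if probability >= 0.5 else 0
print(label)  # 1
```

One unit therefore encodes both classes: probabilities at or above the threshold map to one label, the rest to the other.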

5.2. Let us have a look at the model summary.

5.3. ImageDataGenerator: generates batches of tensor image data with real-time data augmentation.

5.3.1. preprocessing_function: applied to each input image after augmentation. It takes a rank-3 tensor as input and returns a NumPy tensor of the same shape.

image_size = 224
# Generates batches of tensor image data with real-time data augmentation
data_generator = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    validation_split=0.2)  # hold out 20% of the data to use as test data

5.3.2. horizontal_flip: randomly mirrors images left-right, a simple augmentation transform.

5.3.3. width_shift_range: shifts the image horizontally. A range of 0.2 shifts the image left or right by a random amount of up to 20% of its width.

5.3.4. height_shift_range: shifts the image vertically. A range of 0.2 shifts the image up or down by a random amount of up to 20% of its height.

5.3.5. Validation_split: It is the fraction of the image dataset which is kept for validation ( in our case for testing). [14]
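Putting options 5.3.1–5.3.5 together, the generator could be configured as below. This is a sketch: the individual values (0.2 shifts, 20% split, horizontal flipping) come from the descriptions above, but this exact combination is assumed rather than copied from the source.

```python
from tensorflow.keras.applications.resnet50 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

data_generator = ImageDataGenerator(
    preprocessing_function=preprocess_input,  # ResNet50-specific preprocessing
    horizontal_flip=True,      # randomly mirror images left-right
    width_shift_range=0.2,     # random horizontal shift, up to +/-20% of width
    height_shift_range=0.2,    # random vertical shift, up to +/-20% of height
    validation_split=0.2)      # hold out 20% as validation/test data
```

With augmentation, each epoch sees slightly different variants of the same 200 images, which helps the small dataset go further.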

# data_dir holds the dataset path (one subfolder per class); the remaining
# arguments follow the descriptions in 5.4.1-5.4.5
train_generator = data_generator.flow_from_directory(
    data_dir,
    target_size=(image_size, image_size),
    batch_size=32,
    class_mode='binary',
    subset='training')

test_generator = data_generator.flow_from_directory(
    data_dir,
    target_size=(image_size, image_size),
    batch_size=32,
    class_mode='binary',
    subset='validation')

5.4. train_generator and test_generator: we use the flow_from_directory() method, which takes the dataset path and generates batches of augmented data.

5.4.1. data_dir: path to the dataset.

5.4.2. target_size: height × width dimensions of the target image.

5.4.3. batch_size: the number of image files per batch; 32 by default.

5.4.4. class_mode: "binary" denotes 1D binary labels.

5.4.5. subset: which subset of the data to use ("training" or "validation") [14]

5.5. We are going to use early stopping and save the best model to disk so it can be used as our model for prediction later.

# Early stopping & checkpointing the best model in the current directory,
# restoring that as our model for prediction
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

EARLY_STOP_PATIENCE = 5  # stop after 5 epochs with no val_loss improvement

early_stopper = EarlyStopping(monitor='val_loss', patience=EARLY_STOP_PATIENCE)
checkpointer = ModelCheckpoint(filepath='best.hdf5',
                               monitor='val_loss',
                               save_best_only=True,
                               mode='auto')

5.5.1. EarlyStopping(): It will halt the training when the metric has stopped improving. We have kept the stop patience at 5.

5.5.2. ModelCheckpoint(): It will save the model weights at some location.
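Restoring the checkpointed model later might look like the sketch below. The tiny stand-in model and the file name 'best_demo.hdf5' are illustrative only; in the pipeline above, ModelCheckpoint itself writes 'best.hdf5' whenever val_loss improves.

```python
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense

# Stand-in model (illustrative): ModelCheckpoint saves the full model,
# so load_model() can restore it later for prediction.
demo = Sequential([Input(shape=(8,)), Dense(1, activation='sigmoid')])
demo.compile(optimizer='adam', loss='binary_crossentropy')
demo.save('best_demo.hdf5')                # what ModelCheckpoint does on improvement
best_model = load_model('best_demo.hdf5')  # restore the saved snapshot
```

After restoring, best_model.predict() can be called on the test generator to obtain class probabilities.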

# Train the model for a fixed number of epochs; the callbacks handle
# early stopping and checkpointing. NUM_EPOCHS is assumed here, as the
# exact value is not shown in the source.
NUM_EPOCHS = 20
fit_history = model.fit(
    train_generator,
    epochs=NUM_EPOCHS,
    steps_per_epoch=None,        # let Keras derive it from the generator
    validation_data=test_generator,
    validation_steps=None,       # run validation until the data is exhausted
    callbacks=[checkpointer, early_stopper])

5.6. model.fit(): trains the model for a fixed number of epochs.

5.6.1. steps_per_epoch: takes an integer or None. None (the default) means Keras computes it automatically as the number of samples divided by the batch size.

5.6.2. Validation_data: On this data model metrics like loss, accuracy will be computed for each epoch.

5.6.3. validation_steps: with None, validation runs until the test/validation data is exhausted.

5.6.4. Callbacks: list of callbacks to apply during training of the model.[15]
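As a concrete check of the automatic calculation in 5.6.1 and 5.6.3 (a sketch: the 160/40 split follows from the 200 images and 20% validation split described earlier; 32 is the Keras default batch size):

```python
import math

# With 200 images (100 Covid-19 + 100 normal) and a 20% validation
# split, 160 go to training and 40 to validation/testing.
batch_size = 32
train_samples = 160
test_samples = 40

# steps_per_epoch=None / validation_steps=None make Keras compute exactly this:
steps_per_epoch = math.ceil(train_samples / batch_size)
validation_steps = math.ceil(test_samples / batch_size)
print(steps_per_epoch, validation_steps)  # 5 2
```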

5.7. Here is the model output

As we can see, the model gives very good results, improving with every epoch as the loss decreases. We have achieved an accuracy of 92.50%.

Now let's test the same code with 50 randomly selected real Covid-19 images plus 50 images generated by the code above, keeping the normal image dataset the same.

import shutil, random, os
base_path = '/content/drive/MyDrive/Colab Notebooks/Data Mining and Visualization/Assessment_1_task_2/with_synthetic_data/'
covid_dirpath = base_path + 'Covid-19' # original 100 Covid-19 images
synthetic_images = base_path + 'images-covid' # 50 generated images
dest_directory = base_path + 'test2/covid' # destination where combined images will be kept

filenames = random.sample(os.listdir(covid_dirpath), 50)

for fname in filenames:
    shutil.copy(os.path.join(covid_dirpath, fname), dest_directory)

f_names = os.listdir(synthetic_images)
for f in f_names:
    shutil.copy(os.path.join(synthetic_images, f), dest_directory)


1. Import packages

1.1. shutil: to do operations like copy, move, etc.

1.2. random: to generate random numbers

1.3. os: to achieve os-related functionality like getting paths, creating directories, etc.

2. We randomly choose 50 images from the provided Covid-19 dataset using the random.sample() method.

3. Then we copy these files into the test2/covid folder. The test2 directory already contains the 100 normal chest X-ray images in its normal folder.

3.1. test2

– covid (50 provided Covid-19 images + 50 generated Covid-19 chest X-ray images)

– normal (100 normal chest X-rays)

4. Now we run the learning_with_synthetic_data() function, which simply calls the learning function with the path of the test2 folder, and we get the output below:

As we can see, the model delivers very good results, improving with every epoch as the loss decreases. We have achieved an accuracy of 95.00% on the test data.


1. We successfully generated synthetic images using a Conditional Generative Adversarial Network.

2. Implemented the concepts of transfer learning to train our model.

3. Learned about setting up different hyperparameters and how they affect the computation and performance of the model.

4. It was very interesting to implement a real-world example of GANs and see their potential applications.


[1] Wikipedia, "Ian Goodfellow", viewed 17 March 2021.

[2] Viewed 19 March 2021.

[3] Joseph Rocca, "Understanding GANs", 7 Jan 2019, viewed 27 March 2021.

[4] Ian J. Goodfellow, Jean Pouget-Abadie et al., "Generative Adversarial Nets", 10 June 2014, University of Montreal.

[5] Mehdi Mirza, Simon Osindero, "Conditional Generative Adversarial Nets", 6 Nov 2014, University of Montreal.

[6] Raúl Gómez, 23 May 2018, viewed 26 March 2021.

[7] François Chollet, "The Sequential model", 12 April 2020.

[8] Jason Brownlee, "How to Choose an Activation Function for Deep Learning", 18 Jan 2021.

[9]

[10]

[11] Purva Huilgol, "Top 4 Pre-Trained Models for Image Classification with Python Code", Analytics Vidhya, 18 Aug 2020, viewed 13 March 2021.

[12] Will Koehrsen, "Transfer Learning with Convolutional Neural Networks in PyTorch", 26 Nov 2018, viewed 22 March 2021.

[13] "The role of chest radiography in confirming covid-19 pneumonia", BMJ 2020;370:m2426 (published 16 July 2020).

[14] TensorFlow documentation, n.d.

[15] TensorFlow documentation, "tf.keras.Model", n.d.

[16] Jeremy Jordan, "Setting the learning rate of your neural network", 1 March 2018, viewed 1 April 2021.


