Custom Input Loader using Keras Sequence and image augmentation with imgaug
The earlier approach was simple and it gets the job done. However, our data augmentation options are quite limited with Keras preprocessing, and we do not have much control over how our batches are loaded. In addition, let's say that our covid-19 class is short on labels: our dataset is unbalanced! We could upsample covid-19 images at training time to balance each batch (I got this idea from the COVID-Net creators). How can we do all this? Let us review it in parts.
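To make the per-batch upsampling idea concrete, here is a rough sketch. The function name and the fixed covid fraction are my own illustrative assumptions, not the COVID-Net implementation:

```python
import random

def balanced_batch(majority_files, covid_files, batch_size=8, covid_fraction=0.25):
    """Build one batch with a fixed share of covid-19 samples.

    covid-19 files are drawn WITH replacement, so even a small class
    can fill its quota in every batch (i.e. it gets upsampled).
    """
    n_covid = int(batch_size * covid_fraction)
    batch = random.sample(majority_files, batch_size - n_covid)  # no replacement
    batch += random.choices(covid_files, k=n_covid)              # with replacement
    random.shuffle(batch)
    return batch
```

Every batch then contains the same proportion of covid-19 samples, no matter how rare the class is in the full dataset.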
We can define a tf.keras.utils.Sequence. It is great for loading batches of data to feed our network, and we can define any operation we want. From the documentation: a Sequence must implement the methods __getitem__ and __len__, and optionally on_epoch_end for modifications between epochs. We are also implementing __next__, to iterate over our generator. We start with the class initialization definition:
We declare our ImageDataset class, a subclass of the Keras Sequence, and define its parameters: a list of filenames of the dataset, batch_size, augmentation function, whether we want batch balancing or not, image shape, and so forth. Then we create three lists, one for each class. If we want to upsample covid-19 images, we merge normal and pneumonia and leave covid-19 separate. Otherwise we merge everything.
Now we move on to defining the function that tells how many batches this generator provides to complete one epoch:
Simple enough: just divide the total number of samples by the batch_size. Next, we define how we load the images from our disk:
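A minimal stand-in for those two methods could look like this. The zero arrays stand in for real image loading (e.g. with cv2.imread followed by a resize), which is my simplification since the original snippet is not shown here:

```python
import math
import numpy as np

class ImageDataset:
    """Sketch showing only __len__ and __getitem__."""
    def __init__(self, filenames, batch_size=32, image_shape=(224, 224, 3)):
        self.filenames = list(filenames)
        self.batch_size = batch_size
        self.image_shape = image_shape

    def __len__(self):
        # batches per epoch = total samples / batch size
        return math.ceil(len(self.filenames) / self.batch_size)

    def __getitem__(self, index):
        # file names belonging to batch number `index`
        batch = self.filenames[index * self.batch_size:
                               (index + 1) * self.batch_size]
        # a real loader would read and resize each file from disk;
        # zero arrays stand in here
        images = np.zeros((len(batch), *self.image_shape), dtype=np.float32)
        labels = np.array([1 if "covid" in f else 0 for f in batch])
        return images, labels
```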
This code snippet is diligently commented, so hopefully the idea of how batches are loaded is straightforward. Finally, we implement the __next__ method, which calls __getitem__ with the proper index as an argument to load images, and on_epoch_end to shuffle the dataset.
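The iteration logic described above can be sketched as follows (a minimal stand-in, not the article's exact code; only the iteration-related methods are shown):

```python
import random

class ImageDataset:
    """Sketch of the iteration logic only."""
    def __init__(self, filenames, batch_size=4):
        self.filenames = list(filenames)
        self.batch_size = batch_size
        self._index = 0

    def __len__(self):
        return len(self.filenames) // self.batch_size

    def __getitem__(self, index):
        # load the batch at position `index` (file names only, for brevity)
        return self.filenames[index * self.batch_size:
                              (index + 1) * self.batch_size]

    def on_epoch_end(self):
        # reshuffle so the next epoch sees the data in a new order
        random.shuffle(self.filenames)

    def __next__(self):
        batch = self[self._index]
        self._index += 1
        if self._index >= len(self):
            # epoch finished: reset the counter and shuffle
            self._index = 0
            self.on_epoch_end()
        return batch
```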
Now there is one more thing we should take into account, which is data augmentation. There is a dedicated Python library for image augmentation called imgaug. It is well documented and provides many examples to get started. One great thing about it is that it is not restricted to augmentation for image classification, as we can also augment images with bounding box, keypoint, and polygon annotations. So, with that said, let's define our augmentation engine:
This pipeline consists of sequential transformations, but their application is not deterministic: some functions are applied only 50% of the time through the sometimes lambda. This seq object is going to be the argument of the augmentation parameter of our ImageDataset.
Now that we have everything set, we instantiate an image generator object:
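A hypothetical instantiation might look like this. The class below is a stub so the snippet runs on its own; in the article it is the ImageDataset defined earlier, and the imgaug seq object would be passed as the augmentation argument. All argument names are assumptions:

```python
# Stub standing in for the ImageDataset class sketched earlier.
class ImageDataset:
    def __init__(self, filenames, batch_size=32, augmentation=None,
                 balance_covid=False, image_shape=(224, 224, 3)):
        self.filenames = list(filenames)
        self.batch_size = batch_size
        self.augmentation = augmentation
        self.balance_covid = balance_covid
        self.image_shape = image_shape

train_filenames = ["normal_0.png", "pneumonia_0.png", "covid_0.png"]
train_generator = ImageDataset(
    train_filenames,
    batch_size=32,
    augmentation=None,        # pass the imgaug seq object here
    balance_covid=True,       # upsample covid-19 into each batch
    image_shape=(224, 224, 3),
)
```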
Now we can iterate over it to extract batches of data, one at a time. It can also be passed as an argument to model.fit(), just like the training snippet in the first method to train our network. Let's retrieve a sample batch and plot the resulting images along with their labels:
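A sketch of that plotting step, using a random stand-in batch since the generator itself is not reproduced here (in the article, images and labels would come from next(train_generator)):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend so this runs anywhere
import matplotlib.pyplot as plt

# stand-in batch of 8 images with their class labels
rng = np.random.default_rng(0)
images = rng.random((8, 64, 64, 3))
labels = ["normal", "covid-19", "pneumonia", "normal",
          "pneumonia", "covid-19", "normal", "pneumonia"]

fig, axes = plt.subplots(2, 4, figsize=(12, 6))
for ax, img, label in zip(axes.flat, images, labels):
    ax.imshow(img)
    ax.set_title(label)
    ax.axis("off")
fig.savefig("sample_batch.png")
```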
And that is it for the second approach. Again, you can check this code in my Colab notebook.