Can Data Augmentation Really Scale Up Performance?


Unleashing the Power of Data Augmentation

Photo by Joshua Sortino on Unsplash

Twenty years ago, there wasn't much data available for prediction and analysis, and the computational resources to support such analysis were also limited. The amount of data grew exponentially during the Internet Age, and this is when Artificial Intelligence became popular, because its two requirements were finally satisfied:

  • Data
  • Computational Resources

Okay, both data and resources are available now, so what is the issue? There are many applications in which the data is either scarce or restricted.

Picture this — Company A is on a mission to create an Artificial Intelligence model that can detect brain tumors. They've partnered with a hospital to get their hands on brain images of patients, but there's a catch. The data sent by the hospital is scarce, making it difficult for Company A to train its model. And even when they do manage to train it, the performance is not satisfactory.

It is a classic case of quantity vs. quality: the model finds it harder to learn patterns from a limited amount of data. It's like a student trying to get good grades with only a handful of study materials.

Company A does not need to worry, because they can scale up the original data by using Data Augmentation techniques. Don't you find that interesting?

What is Data Augmentation?

As the name suggests, we augment the data: we produce new samples from the existing ones by applying various transformation techniques. It can be applied to any type of dataset:

  • Tabular Data — add new rows with slight modifications, or perturb the values of specific features (a minimal sketch follows this list).
  • Image Data — flipping, rotating, and adjusting brightness or saturation.
  • Text Data — synonym replacement, word addition/deletion.
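
As an example of the tabular case, here is a minimal sketch (using NumPy and a hypothetical feature matrix X) that creates new rows by jittering existing ones with small Gaussian noise:

import numpy as np

# Hypothetical numeric feature matrix: 100 rows, 4 features
X = np.random.rand(100, 4)

# Create modified copies of the rows by adding small Gaussian noise;
# the noise scale (0.01) is an assumption and should be tuned per feature
noise = np.random.normal(loc=0.0, scale=0.01, size=X.shape)
X_augmented = np.vstack([X, X + noise])  # the dataset is now twice as large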

Practical Implementation With TensorFlow

Before diving into the code and analysis, I would like to remind you:

Manual augmentation is possible, but it is very expensive.

The dataset is downloaded from Kaggle and it comprises dogs and cats (which many people love, me too). With only 250 images for each class, this dataset is relatively small, making it an ideal candidate for observing the results of data augmentation.

Preview of the dataset before Augmentation:

Dogs and Cats before data augmentation(Image by the Author)

After Augmentation:

Dogs and Cats after data augmentation(Image by the Author)

These dogs and cats are cute before augmentation. They are still cute after augmentation, but since the images are rotated, translated, and sheared, they look somewhat pixelated.

Code

From the TensorFlow library, the ImageDataGenerator class is used to apply various kinds of transformations; these include rotation (rotation_range), shifting (width_shift_range, height_shift_range), flipping (horizontal_flip), zooming (zoom_range), and shearing/slanting (shear_range).

from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img

datagen = ImageDataGenerator(
    rotation_range=40,        # random rotations of up to 40 degrees
    width_shift_range=0.2,    # random horizontal shifts
    height_shift_range=0.2,   # random vertical shifts
    shear_range=0.2,          # random shearing (slanting)
    zoom_range=0.2,           # random zoom in/out
    horizontal_flip=True,     # random horizontal flips
    fill_mode='nearest')      # fill newly exposed pixels with the nearest value
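
To produce a preview like the images shown earlier, one option is to pull a few batches from the generator and plot them. This is a minimal sketch, assuming matplotlib is installed and that sample.jpg is a hypothetical image from the dataset:

import matplotlib.pyplot as plt

# Load one image and add a batch dimension: (1, height, width, 3)
img = load_img('dogs_cat/train/dogs/sample.jpg')  # hypothetical filename
x = img_to_array(img)
x = x.reshape((1,) + x.shape)

# Draw four augmented variants of the same image
fig, axes = plt.subplots(1, 4, figsize=(12, 3))
for ax, batch in zip(axes, datagen.flow(x, batch_size=1)):
    ax.imshow(array_to_img(batch[0]))
    ax.axis('off')
plt.show()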

Access all the filenames in the dog and cat dataset using the glob library.

from glob import glob

# The path directory for the dog images
filename_dogs_list = glob("dogs_cat/train/dogs/*.jpg")
# The path directory for the cat images
filename_cats_list = glob("dogs_cat/train/cats/*.jpg")

For each image in the dog dataset, the code below creates 20 more images through the above transformations.

The number 20 can be increased by changing it in the code. Be careful with the size of the training dataset, though, as quality might decrease if augmentation is pushed too far.

for i in range(len(filename_dogs_list)):
    img = load_img(filename_dogs_list[i])  # this is a PIL image
    x = img_to_array(img)                  # a NumPy array with shape (height, width, 3)
    x = x.reshape((1,) + x.shape)          # add a batch dimension: (1, height, width, 3)

    j = 0
    for batch in datagen.flow(x, batch_size=1,
                              save_to_dir='dogs_cat/train/aug_dog',
                              save_prefix='dog', save_format='jpeg'):
        j += 1
        if j >= 20:
            break  # otherwise the generator would loop indefinitely
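
The cat images need the same treatment. Below is a sketch of the equivalent loop for cats; the output directory aug_cat is an assumption and must exist before running it:

for i in range(len(filename_cats_list)):
    x = img_to_array(load_img(filename_cats_list[i]))  # PIL image -> NumPy array
    x = x.reshape((1,) + x.shape)                      # add a batch dimension

    j = 0
    for batch in datagen.flow(x, batch_size=1,
                              save_to_dir='dogs_cat/train/aug_cat',  # assumed directory
                              save_prefix='cat', save_format='jpeg'):
        j += 1
        if j >= 20:
            break  # the generator loops indefinitely otherwise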

Model training

A basic Convolutional Neural Network is used for the classification of dogs and cats. The input images passed to the network have a resolution of (150, 150, 3), and the network consists of only two convolutional layers followed by one Flatten layer and one Dense layer. The same architecture is used for both the augmented and the original datasets.


from tensorflow.keras import layers, models

model = models.Sequential()
## First convolution layer
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
## Second convolution layer
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
## Flatten to one dimension, then a single sigmoid output for binary classification
model.add(layers.Flatten())
model.add(layers.Dense(1, activation='sigmoid'))

# Compiling the model
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Training the model
history = model.fit(train_generator, epochs=10,
                    validation_data=validation_generator)
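
Note that train_generator and validation_generator are not defined in the snippet above. A minimal sketch of how they might be built with flow_from_directory follows; the directory layout (one subfolder per class) and the 80/20 split are assumptions:

# Rescale pixel values to [0, 1] and hold out 20% of the images for validation
gen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

train_generator = gen.flow_from_directory(
    'dogs_cat/train',        # assumed to contain one subfolder per class
    target_size=(150, 150),  # matches the model's input shape
    batch_size=32,
    class_mode='binary',     # single sigmoid output
    subset='training')

validation_generator = gen.flow_from_directory(
    'dogs_cat/train',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary',
    subset='validation')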

Results

Accuracy curves for the models trained on the original and augmented datasets (Image by the Author)

The training accuracy for both models is on par after 10 epochs, reaching around 90%, whereas the validation accuracy for the original and augmented datasets is around 60% and 68% respectively. This shows that the model trained on the augmented dataset generalizes better than the model trained on the original dataset.

The augmentation worked very well for this dataset, improving validation accuracy by about 8 percentage points (from roughly 60% to 68%).

Conclusion

It is not guaranteed that model performance will increase when applying data augmentation; it depends on many factors, such as dataset complexity, dataset size, model architecture, image resolution, and so on.

There is a high probability of a performance gain when it is applied to a small dataset. When I applied data augmentation to medium and large datasets, the performance was the same as, or even worse than, the performance on the original dataset. Therefore, it is important to carefully evaluate the impact of data augmentation on such datasets before committing to it.

References

The dataset is taken from Kaggle: https://www.kaggle.com/datasets/samuelcortinhas/cats-and-dogs-image-classification
