# How to teach a machine to identify the numbers in an image using the Keras library

## In this tutorial, we will teach our machine to identify the number in an image

To do this project we need software such as Jupyter Notebook or Google Colab, which support interactive data science and scientific computing.

For simple data processing, Google Colab is enough, but it is recommended to have Jupyter Notebook on your system. If you don’t have Jupyter Notebook, install it first.

So we have our software now. Create a virtual environment and install the required libraries, Keras and TensorFlow. Let’s start the project.

From that environment, launch Jupyter Notebook.

1. Open a new notebook in Jupyter Notebook.

2. To keep the code behaving consistently across Python versions, older Keras examples import modules from the `__future__` library. For reproducibility, we seed NumPy’s random number generator with `numpy.random.seed()`.
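A minimal sketch of this step. The seed value `42` is an arbitrary choice; any fixed integer gives reproducible results:

```python
from __future__ import print_function  # Python 2/3 compatibility in older examples

import numpy as np

np.random.seed(42)       # fix the seed so every run draws the same numbers
a = np.random.rand(3)

np.random.seed(42)       # re-seeding with the same value...
b = np.random.rand(3)    # ...reproduces the exact same draw

print(np.allclose(a, b))
```

With the same seed, both draws are identical, so experiments can be repeated exactly.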

3. Now import Keras and the required modules into the code. The dataset is imported from `keras.datasets.mnist`.
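The imports for the rest of the tutorial might look like this (a sketch assuming TensorFlow 2.x, where Keras ships as `tf.keras`):

```python
# Core Keras pieces used in the following steps.
from tensorflow import keras
from tensorflow.keras.datasets import mnist       # the handwritten-digit dataset
from tensorflow.keras.models import Sequential    # linear stack of layers
from tensorflow.keras.layers import Dense         # fully connected layer
from tensorflow.keras.optimizers import SGD       # stochastic gradient descent
```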

4. We declare the required variables and assign the datasets to them: `x_train`, `y_train` are the training images and their labels; `x_test`, `y_test` are the test images and their labels.

`num_classes = 10`

The dataset contains the digits 0 to 9, so the number of classes is 10.

`batch_size = 128`

We train batch-wise to avoid memory bottlenecks.

`epochs` is how many times we pass over the training data; more epochs can increase accuracy, but only up to a limit.
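The step above can be sketched as follows. Loading real MNIST via `keras.datasets.mnist.load_data()` needs a network download, so this sketch substitutes random arrays with MNIST’s shapes and dtypes; the real call is shown in the comment:

```python
import numpy as np

num_classes = 10   # digits 0-9
batch_size = 128   # samples per gradient update
epochs = 20        # passes over the training set (20 is an arbitrary example value)

# In practice:
#   from tensorflow.keras.datasets import mnist
#   (x_train, y_train), (x_test, y_test) = mnist.load_data()
# The stand-in below only mimics MNIST's shapes (60k/10k grayscale 28x28 images).
rng = np.random.default_rng(0)
x_train = rng.integers(0, 256, size=(60000, 28, 28), dtype=np.uint8)
y_train = rng.integers(0, 10, size=(60000,), dtype=np.uint8)
x_test = rng.integers(0, 256, size=(10000, 28, 28), dtype=np.uint8)
y_test = rng.integers(0, 10, size=(10000,), dtype=np.uint8)
```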

5. The images are 28×28 pixels, and a dense network expects a flat vector of pixels, so we reshape each image from 28×28 to a vector of 784 values. We also convert the data type to float32, which is needed for the normalization step and uses half the memory of float64.
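A sketch of the reshape, using a random stand-in array with MNIST’s training-set shape:

```python
import numpy as np

# Hypothetical stand-in for the raw MNIST training images (uint8, 28x28).
x_train = np.random.randint(0, 256, size=(60000, 28, 28)).astype("uint8")

# Flatten each 28x28 image into a 784-element vector and convert to float32.
x_train = x_train.reshape(60000, 784).astype("float32")
```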

6. The pixel values range from 0 to 255; to normalize the grayscale images, we rescale them to the range [0, 1] by dividing by 255.
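The rescaling in one line, on a hypothetical batch of pixel values:

```python
import numpy as np

# Hypothetical batch of float32 pixel values in [0, 255].
x = np.random.randint(0, 256, size=(100, 784)).astype("float32")

x /= 255.0  # rescale from [0, 255] to [0, 1]
```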

7. Convert the integer class vector (labels 0–9) into a binary class matrix (one-hot encoding), where each label becomes a row with a single 1 in the column of its class.
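Keras provides `keras.utils.to_categorical` for this; a NumPy equivalent on a few hypothetical labels shows what it produces:

```python
import numpy as np

y = np.array([3, 0, 9, 1])   # hypothetical integer labels
num_classes = 10

# Equivalent of keras.utils.to_categorical(y, num_classes):
# row i is all zeros except a 1 in column y[i].
y_onehot = np.eye(num_classes, dtype="float32")[y]
```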

8. We use a Sequential model here and define 3 layers of nodes: 2 hidden dense layers and an output layer. (Note that the number of layers and units can be varied to get better accuracy.)
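One way to build such a model (the 512 units per hidden layer is an assumption; the text only specifies two dense layers plus an output layer):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(512, activation="relu", input_shape=(784,)),  # hidden layer 1
    Dense(512, activation="relu"),                      # hidden layer 2
    Dense(10, activation="softmax"),                    # one probability per digit 0-9
])
```

Softmax in the output layer turns the 10 raw scores into a probability distribution over the digit classes.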

9. Now we compile our model with the optimizer SGD(). Optimizers are used to change the parameters of our model to minimize the loss function and make our predictions as accurate as possible.
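A compile sketch; `categorical_crossentropy` is the usual loss for one-hot labels, and `accuracy` is tracked as a metric:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    Dense(512, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])

# categorical_crossentropy pairs with one-hot encoded labels.
model.compile(loss="categorical_crossentropy",
              optimizer=SGD(),
              metrics=["accuracy"])
</antml_code>```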

10. Let’s train the model with our training dataset.
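Training is a single `model.fit()` call. To keep this sketch self-contained and fast, it uses a tiny model and small random stand-in arrays shaped like the preprocessed MNIST data; in practice you would pass the real `x_train`, `y_train` with `batch_size=128` and your chosen number of epochs:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    Dense(32, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer=SGD(), metrics=["accuracy"])

# Hypothetical stand-in data with the post-preprocessing shapes.
x_train = np.random.rand(256, 784).astype("float32")
y_train = np.eye(10, dtype="float32")[np.random.randint(0, 10, 256)]

history = model.fit(x_train, y_train, batch_size=128, epochs=2, verbose=0)
```

`fit` returns a `History` object whose `history` dict records the loss and metrics per epoch.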

11. Test the model against the ground-truth labels of the test dataset. Accuracy can be increased by training for more epochs, but only to a certain extent.
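Evaluation is a `model.evaluate()` call, which returns the loss and each compiled metric. Again, a small self-contained sketch with hypothetical stand-in data; in practice you would pass the real `x_test`, `y_test`:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    Dense(32, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer=SGD(), metrics=["accuracy"])

# Hypothetical stand-in test data.
x_test = np.random.rand(100, 784).astype("float32")
y_test = np.eye(10, dtype="float32")[np.random.randint(0, 10, 100)]

loss, acc = model.evaluate(x_test, y_test, verbose=0)
```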
