
# Start Deep Learning with Keras

**What is Deep Learning?**

Guess what is the best learning tool in the world…?

Some of you might have guessed it right. It’s your brain. The human brain is the most powerful learning tool in the world.

Deep Learning is the branch of Machine Learning where we try to mimic the learning process of the human brain. In Deep Learning, we try to make the machine adapt its learning process to the way the human brain learns.

**Neuron and Neural Network**

The neuron is the basic building block of the human brain. The human brain consists of billions of interconnected neurons which are responsible for transmitting information, and this network of neurons is called a neural network. This is what a biological neuron looks like:

And this is what a basic Artificial Neural Network with only one neuron (also called a node) looks like:

Here the inputs are provided to the neuron from the input layer, and the neuron gives an output after applying the **Activation Function** (discussed further below). The input layer can be thought of as the sensing organs of the human body, and the output as the conclusive information gained from those sensing organs.

Now let’s discuss what happens in the neural network (in the image above, the single neuron can be considered as the whole neural network.)

**Step 1:- Giving input from the input layer.**

In the input layer, there are several input nodes, which hold the values of our features for a single row. If a training dataset has n feature columns, then there will be n input nodes in the input layer, one for each of the features.

**Step 2:- Adjusting weights of input nodes.**

Weights are a very crucial entity in the neural network. Before the input values are sent into the neural network, each of the feature values is multiplied by its weight. A weight can take any real value, and it can be different for each input node.

Weight is the entity that defines the importance of an input feature. A higher weight indicates that the feature is really important for the output prediction, and a lower weight indicates that the feature is not important.

For example, suppose you want to predict whether a person is sick or not, and you have two features: the person's color and body temperature. Here the weight for color will be very small or near zero, because color doesn't tell us whether a person is sick. On the other hand, the weight for body temperature will be high, because body temperature is a really important feature for telling whether a person is sick.

The weights are randomly initialized before training the neural network, and the network adjusts the weights for all the features during training. After training, the weights are fixed, and outputs are predicted according to these weights.

**Step 3 : What happens inside the neuron?**

The neuron simply sums up the weighted inputs and then applies the activation function to this value. If **x** is the input, **w** is the weight, and **n** is the number of input nodes, the neuron will output this value before applying the activation function:

`x1*w1 + x2*w2 + … + xn*wn`
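The weighted sum above can be sketched in NumPy; the input and weight values here are made up purely for illustration:

```python
import numpy as np

# Three input features (x) and their weights (w); values are made up
x = np.array([0.5, 1.0, 2.0])
w = np.array([0.1, 0.4, 0.3])

# The neuron's pre-activation value: x1*w1 + x2*w2 + ... + xn*wn
z = np.dot(x, w)
print(z)  # 0.5*0.1 + 1.0*0.4 + 2.0*0.3 = 1.05
```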

**Step 4: Applying the Activation Function on the weighted sum of input values.**

The activation function is the function that decides what kind of output the neuron needs to send. Several common activation functions are discussed below:

**Threshold (step) function:** This function is used when the output is in binary form. It gives 0 when the input is less than 0 and gives 1 when the input is greater than or equal to 0.

**Sigmoid function:** The output value of this function varies between zero and one. It is useful when our output value is continuous (like a probability).

**Rectifier (ReLU) function:** This function is used for continuous values. It gives 0 when the input is less than or equal to zero and returns the input itself when it is greater than zero.

**Hyperbolic tangent (tanh) function:** This function is similar to the sigmoid function. Its output value is continuous and varies between -1 and 1.
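The activation functions described above — threshold (step), sigmoid, rectifier (ReLU), and hyperbolic tangent — can be sketched in NumPy like this:

```python
import numpy as np

def threshold(x):
    # Step function: 0 for x < 0, 1 for x >= 0
    return np.where(x < 0, 0, 1)

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Rectifier: 0 for x <= 0, the input itself for x > 0
    return np.maximum(0, x)

def tanh(x):
    # Like sigmoid, but the output varies between -1 and 1
    return np.tanh(x)

print(threshold(np.array([-2.0, 3.0])))  # [0 1]
print(relu(np.array([-2.0, 3.0])))       # [0. 3.]
```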

The output node is responsible for giving us the output of our neural network. We have to choose the activation function for the output node according to the type of output value.

**Note:** If our output is in multiclass form, the number of output nodes will also be multiple, i.e., our output will be in dummy variable form.

# How does a Neural Network learn and Work?

Learning of a Neural Network is the most complex part of Deep Learning. An artificial Neural Network looks something like this :

Let’s assume this is an already trained neural network and now first we’ll understand how it works, step-by-step.

- First, we give the input features to the input layer.
- Then, the weighted inputs are passed to the hidden layers.
- In the hidden layer, each node processes the value according to the activation function and passes the values to the output layer.
- The output layer will give the output.

This is how a trained neural network predicts the values. Now we’ll look into the learning process of neural networks.
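The forward pass described in the steps above can be sketched for a tiny network with one hidden layer. The weights here are random (as in an untrained network), and sigmoid is chosen as the activation purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# 3 input features -> 4 hidden nodes -> 1 output node
W1 = rng.normal(size=(3, 4))  # input-to-hidden weights
W2 = rng.normal(size=(4, 1))  # hidden-to-output weights

x = np.array([0.2, 0.7, 0.1])  # input features for one instance
hidden = sigmoid(x @ W1)       # hidden layer applies the activation function
output = sigmoid(hidden @ W2)  # output layer gives the prediction
print(output.shape)  # (1,)
```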

# How does an Artificial Neural Network learn?

Here we have the most basic form of an Artificial Neural Network, with only one neuron in the hidden layer. This whole structure is also called a Perceptron.

Here **y^** is the output value given by the neural network and **y** is the original value. Now let’s see step by step how this neural network learns.

- First, we give the inputs to the input layer for one instance, and the network produces the output **y^** according to the weights and the activation function.
- Now we have a cost function C, which measures the error between the output value **y^** and the original value **y**. The neural network tries to minimize this cost function while learning.
- After calculating the error, we send this information back through the network, and it adjusts the weights to minimize the cost function for the next instance. This process is called *Back Propagation*.
- Now the same process is repeated with the next instance (the next row of our dataset), and the weights are adjusted again.
- When the neural network has gone through the whole dataset once, it is called an *epoch*. The neural network performs multiple epochs, adjusting the weights each time.
- We can also define the batch size: the *Batch Size* is the number of instances after which the neural network updates the weights (i.e., backpropagation is performed).

And this way our Neural Network is trained.
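As a toy illustration of the loop above (forward pass, cost, weight adjustment, epochs), here is a single-neuron perceptron trained with gradient descent on made-up data. This is a simplified sketch, not the Keras network built later:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Made-up, linearly separable data: 100 rows, 2 features, binary label
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = rng.normal(size=2)  # weights are randomly initialized before training
lr = 0.5                # learning rate

for epoch in range(200):               # one epoch = one pass over the whole dataset
    y_hat = sigmoid(X @ w)             # forward pass: predicted output y^
    grad = X.T @ (y_hat - y) / len(y)  # gradient of the (cross-entropy) cost C
    w -= lr * grad                     # adjust the weights (back propagation step)

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(accuracy)
```

This sketch updates the weights once per epoch (full-batch); with a smaller batch size, the same update would instead happen after every batch of instances.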

# Building an Artificial Neural Network in Python

Now we’ll walk through the steps of making an ANN in Python with the Keras library. We are going to train on a public Accident Severity dataset from Kaggle. This dataset was uploaded by the UK Govt. and is publicly available on Kaggle.com.

Here **Accident_Severity** is the dependent variable. We’ll mostly focus on creating the neural network; assuming you are already familiar with data preprocessing, we are skipping that part.

Here is the list of all the columns of the dataset.

Before going into the Neural Network, the following necessary data preprocessing methods are applied to the dataset:

- Removing the columns that are not useful for training.
- Label Encoding the categorical variables.
- Splitting into training and test sets.
- Applying feature scaling on the independent variables (X).
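Since the original code snippets were shown as images, here is one way these preprocessing steps could look with pandas and scikit-learn. The small DataFrame and its column names below are placeholders standing in for the actual Kaggle data, whose schema is not reproduced here:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

# A tiny made-up frame standing in for the accident dataset
df = pd.DataFrame({
    "Accident_Index": ["A1", "A2", "A3", "A4", "A5", "A6"],
    "Road_Type": ["urban", "rural", "urban", "rural", "urban", "rural"],
    "Speed_limit": [30, 60, 30, 70, 40, 60],
    "Accident_Severity": [1, 2, 1, 3, 2, 1],
})

# Removing columns not useful for training
df = df.drop(columns=["Accident_Index"])

# Label Encoding the categorical variables
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col])

# Splitting into independent (X) and dependent (y) variables, then train/test
X = df.drop(columns=["Accident_Severity"]).values
y = df["Accident_Severity"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)

# Feature scaling on the independent variables only
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train.shape, X_test.shape)
```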

Now importing the libraries for neural networks.
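The original import snippet was an image; a typical set of imports for this task, assuming the TensorFlow-bundled Keras, would be:

```python
# Keras as shipped with TensorFlow
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```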

Initializing the ANN, adding the input layer and two hidden layers, and finally adding the output layer.
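One way to express “input layer + two hidden layers + output layer” in Keras is shown below. The layer sizes and the number of input features are assumptions, since the article’s actual values were in an image; three output nodes with softmax match the multiclass, dummy-variable output discussed earlier:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(10,)),              # input layer: 10 features (assumed)
    Dense(6, activation="relu"),     # first hidden layer
    Dense(6, activation="relu"),     # second hidden layer
    Dense(3, activation="softmax"),  # output layer: 3 severity classes (assumed)
])
model.summary()
```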

Now finally training our neural network :
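Compiling and fitting the network could look like the sketch below. Random placeholder data stands in for the preprocessed X_train/y_train (which were shown as images), and the layer sizes repeat the assumptions made above:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Placeholder data: 200 rows, 10 features, 3 classes in dummy (one-hot) form
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10)).astype("float32")
y_train = np.eye(3)[rng.integers(0, 3, size=200)].astype("float32")

model = Sequential([
    Input(shape=(10,)),
    Dense(6, activation="relu"),
    Dense(6, activation="relu"),
    Dense(3, activation="softmax"),
])

# categorical_crossentropy matches the one-hot (dummy variable) targets
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_train, y_train, batch_size=32, epochs=5, verbose=0)
print(history.history["loss"][-1])
```

Note how `batch_size` and `epochs` here correspond directly to the batch size and epoch concepts from the learning-process section.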

And our Neural Network classifier is now ready to make predictions on new data.
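Making a prediction on a new (made-up, already preprocessed) row could then look like this; the model here is a fresh untrained copy purely so the snippet is self-contained:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(10,)),
    Dense(6, activation="relu"),
    Dense(3, activation="softmax"),
])

# One new preprocessed row -> class probabilities -> predicted class index
x_new = np.zeros((1, 10), dtype="float32")
probs = model.predict(x_new, verbose=0)
print(probs.shape, int(np.argmax(probs, axis=1)[0]))
```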

So we learned what deep learning is and how to build an artificial neural network.
