# Understand Generative Adversarial Network (GAN) in Deep Learning

## Generating artificial images in the deep convolutional neural network model

In this article, we will discuss Generative Adversarial Networks (GANs), a deep neural network architecture composed of two neural networks competing against each other. The name GAN is composed of two words, whose meanings are shown below:

Generative means the model learns to generate a probability distribution that becomes close to the distribution of the original data we want to approximate.

Adversarial refers to opposition: the two models, the discriminator and the generator, oppose each other while learning the probability distribution. In short, GANs are neural networks trained in an adversarial manner to generate data that mimics a distribution we want to approximate.

There are two relevant classes of models in machine learning:

1. Discriminative model: discriminates between two different classes of data.
2. Generative model: generates images from a random distribution so that the generated distribution D’ becomes as close as possible to the real image distribution D.

Mathematically,

a noise sample z ~ Z is mapped to a sample G(z) ~ D’

Here, D is the distribution of the real data points x. The generator's job is to transform the random distribution Z into D’ and make D’ as close as possible to D in terms of a metric such as the l1 or l2 norm. The generator G can be a simple neural network, a deep neural network, or a convolutional neural network.

Now, after getting a fake sample from the generator, the discriminator distinguishes between the fake sample G(z) and a real sample: it outputs ‘0’ if the sample comes from D’ and ‘1’ if it comes from D. If the discriminator outputs ‘0.5’, it can no longer distinguish between real and fake samples.
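The discriminator's decision rule can be sketched in a few lines. In this hypothetical snippet (the names and threshold values are illustrative, not from the article), raw discriminator scores are squashed into probabilities and interpreted against 0.5:

```python
import numpy as np

def sigmoid(x):
    # Squash a raw score (logit) into a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical raw discriminator scores for three samples
logits = np.array([2.0, -2.0, 0.0])
probs = sigmoid(logits)

for p in probs:
    if p > 0.5:
        label = "real (from D)"     # value close to 1
    elif p < 0.5:
        label = "fake (from D')"    # value close to 0
    else:
        label = "cannot tell"       # exactly 0.5: discriminator is fooled
    print(round(float(p), 3), label)
```

A score of exactly 0.5 corresponds to the "fooled" state described above, where the discriminator is no better than a coin flip.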

What is the Loss function in Generative Adversarial Networks (GAN)?

The important point to notice is that the generator tries to come up with an image that fools the discriminator, i.e. a fake image realistic enough that the discriminator cannot distinguish it from a real one. When the discriminator does recognize a generated image as fake, the resulting error is used to update the weights and biases of both the generator and the discriminator.

The loss function used is binary cross-entropy:

L = −(y · log(ŷ) + (1 − y) · log(1 − ŷ))

where ŷ is the model's prediction and y is the true label (1 for a real image, 0 for a fake one).
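Binary cross-entropy can be computed directly in NumPy. This is a minimal sketch (the function name and example values are illustrative):

```python
import numpy as np

def binary_cross_entropy(y, y_hat, eps=1e-12):
    # y: true labels (1 = real, 0 = fake); y_hat: predicted probabilities
    y_hat = np.clip(y_hat, eps, 1 - eps)  # clip to avoid log(0)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

y = np.array([1.0, 0.0, 1.0, 0.0])        # true labels
y_hat = np.array([0.9, 0.1, 0.8, 0.2])    # confident, mostly correct predictions
loss = binary_cross_entropy(y, y_hat)
print(loss)  # small loss, since predictions agree with the labels
```

The loss shrinks toward 0 as the predictions approach the labels, and grows without bound as a confident prediction turns out wrong.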

The actual loss function of a GAN is the minimax objective:

min over G, max over D of V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]

where x is drawn from the real data distribution and z from the noise distribution Z.

How the algorithm works in Generative Adversarial Networks

The basic steps for building a GAN are shown below:

Step 1: Import all the libraries

Step 3: Set the training and network parameters

Step 4: Set the weight and bias variables

Step 5: Define the generator and discriminator functions.

Step 6: Define the loss and optimization

• Discriminator loss: quantifies how well the discriminator differentiates between real and fake images.
• Generator loss: quantifies how well the generator tricks the discriminator into classifying a fake image as real.
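The two losses above can be sketched using binary cross-entropy. This is a minimal illustration assuming sigmoid outputs from the discriminator (the function names are hypothetical):

```python
import numpy as np

def bce(y, y_hat, eps=1e-12):
    # Binary cross-entropy between labels y and predicted probabilities y_hat
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def discriminator_loss(d_real, d_fake):
    # Real samples should score 1, fake samples should score 0
    return bce(np.ones_like(d_real), d_real) + bce(np.zeros_like(d_fake), d_fake)

def generator_loss(d_fake):
    # The generator wants the discriminator to output 1 on fake samples
    return bce(np.ones_like(d_fake), d_fake)

d_real = np.array([0.9, 0.8])   # discriminator scores on real images
d_fake = np.array([0.2, 0.1])   # discriminator scores on generated images
print(discriminator_loss(d_real, d_fake))  # low: D is doing well
print(generator_loss(d_fake))              # high: G is not fooling D yet
```

Note that the same discriminator scores on fake images produce a low discriminator loss but a high generator loss: the two objectives pull in opposite directions, which is the adversarial part of the training.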

Step 7: Train the generator and discriminator, saving the generator's images to compare against the real images.
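Putting the steps together, here is a minimal, hypothetical NumPy sketch of the whole loop: a one-dimensional "GAN" in which both networks are single linear units and the gradients of the binary cross-entropy losses are written out by hand (a real implementation would use a deep learning framework with automatic differentiation; all parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Steps 3-4: training parameters, weights and biases (illustrative values)
lr, steps, batch = 0.05, 500, 64
w, c = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0          # generator:     G(z) = a*z + b

for step in range(steps):
    # Real and fake batches
    x = rng.normal(4.0, 0.5, batch)      # real data ~ N(4, 0.5)
    z = rng.normal(0.0, 1.0, batch)      # noise z ~ Z = N(0, 1)
    g = a * z + b                        # generated samples G(z)
    p_r, p_f = sigmoid(w * x + c), sigmoid(w * g + c)

    # Step 6: discriminator update (gradient of BCE taken by hand)
    dw = np.mean((p_r - 1.0) * x + p_f * g)
    dc = np.mean((p_r - 1.0) + p_f)
    w, c = w - lr * dw, c - lr * dc

    # Step 6: generator update (non-saturating loss -log D(G(z)))
    p_f = sigmoid(w * g + c)
    da = np.mean((p_f - 1.0) * w * z)
    db = np.mean((p_f - 1.0) * w)
    a, b = a - lr * da, b - lr * db

# Step 7: after training, G(z) should drift toward the real distribution
print("generator mean ->", np.mean(a * rng.normal(0, 1, 10000) + b))
```

Alternating the two updates is the key design choice: each step the discriminator is pushed to separate D from D’, and the generator is immediately pushed to close the gap again.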

Limitations of Generative Adversarial Networks (GAN)

• What is Mode Collapse in GAN?

During training, the generator may collapse to a setting where it always produces the same output. This is called mode collapse.

Another issue is vanishing gradients: the derivatives with respect to the weights and biases become close to zero, so the models receive almost no learning signal.

• Hard to achieve Nash equilibrium

There is no coordination between the cost updates of the discriminator and the generator: each model updates its cost with no regard to the other. As a result, the gradients of the two models cannot guarantee convergence.

In many cases the generated images are blurry, and the GAN may fail to count objects correctly, producing more instances of an object in an image than there should be.

Sometimes a Generative Adversarial Network is not capable of understanding perspective, for example distinguishing the front and back view of an object in an image.

Conclusion