Applications of Generative Adversarial Networks (GANs)




Today, computer science is regarded as one of the most influential fields in the world, largely due to the emergence of deep learning. Hundreds of studies have been conducted to develop and optimize deep learning applications such as GANs. Because real-world data is complex, simple generative models often struggle to capture it. With the help of deep learning, however, GANs can learn quickly and efficiently from massive amounts of data. This is especially valuable for domains such as healthcare, where the volume of available data keeps growing.

A generative adversarial network (GAN) is a type of architecture used to build generative models, such as deep convolutional neural networks (DCNNs) for image generation. Ian J. Goodfellow formally introduced GANs in 2014 as a robust class of neural networks. A GAN consists of two competing neural networks that contend to study, capture, and copy the variations within a dataset [17]. Generative models account for the majority of unsupervised learning approaches, and GANs are among the simplest and most practical deep learning solutions in this family.

To better comprehend Generative Adversarial Networks, the name can be broken down term by term [21]. "Generative" refers to a generative model, in which the network constantly generates new data; "Adversarial" refers to the fact that two networks compete with one another; and "Network" simply refers to the neural networks that do the generating. GANs are generative models: they can produce visuals that have never been seen before. After learning about the world (objects, animals, and so on), they create new versions of these pictures that did not previously exist.

Learning how GANs function and how deep convolutional neural network models can be trained in a GAN architecture for image generation can be difficult. For novices, experimenting with, constructing, and implementing GANs on standard image datasets used in computer vision, such as the Fashion-MNIST dataset [19], is a good place to start. MNIST has been the subject of so much research that it has been dubbed the “Hello World” of machine learning: whenever a new classification method is developed, people want to see how it performs on MNIST. The Fashion-MNIST dataset, used here to illustrate the GAN process, consists of 60,000 small square 28×28-pixel grayscale photos of 10 distinct types of clothing, including shoes, t-shirts, dresses, and more [13].
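As a concrete starting point, the snippet below sketches the usual preprocessing for GAN training on this kind of data: pixel values are rescaled from [0, 255] to [-1, 1] to match a tanh generator output. A synthetic batch stands in for the real images here; in practice the dataset would typically be loaded through a library helper such as `tf.keras.datasets.fashion_mnist` (an assumption about your setup, not something this article prescribes).

```python
import numpy as np

def normalize_images(batch):
    """Rescale uint8 grayscale images from [0, 255] to [-1.0, 1.0]."""
    return batch.astype(np.float32) / 127.5 - 1.0

# Synthetic stand-in for a batch of Fashion-MNIST images: 64 photos, 28x28, grayscale.
rng = np.random.default_rng(0)
raw_batch = rng.integers(0, 256, size=(64, 28, 28), dtype=np.uint8)

x = normalize_images(raw_batch)
print(x.shape)  # (64, 28, 28)
```

The [-1, 1] range matters because the generator described later ends in a tanh activation, so real and fake images must live on the same scale for the discriminator's comparison to be fair.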

What are Generative Adversarial Networks?

Fig 1: Examples of GAN real-world implementation [23]

GANs (Generative Adversarial Networks) are generative deep learning models. [14] More precisely, GAN is a type of architecture for training generative models in general, and deep convolutional neural networks are the models most commonly employed within this architecture. The notion was first introduced in a 2014 paper titled “Generative Adversarial Networks” by Ian Goodfellow and colleagues. Alec Radford and colleagues published a 2015 paper called “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks” that established the standardized approach of DCGANs, which yields more stable models.

GANs are a clever way of training a generative model by framing the problem as a supervised learning problem with two sub-models [14]: the generator model, which we train to generate new examples, and the discriminator model, which tries to classify examples as either real (from the domain) or fake (generated). The two models are trained in an adversarial zero-sum game until the discriminator model is fooled roughly half of the time, indicating that the generator model is producing believable instances.

In the discipline of generative adversarial networks (GANs), which has arisen in recent years, generative models deliver on the promise of their approach by providing accurate depictions in a variety of problem domains. These include image-to-image translation tasks, such as translating summer to winter or day to night, and the generation of photorealistic images of objects, scenes, and people that even humans cannot tell are fake [14].

Generator Model

The generator model takes a fixed-length random vector as input and produces a sample from the domain. A Gaussian distribution is used to generate the vector, which then seeds the generative process. During training, points in this multidimensional vector space come to correspond to points in the problem domain, forming a compressed representation of the data distribution. This vector space is referred to as a latent space: a space of random variables that we cannot observe directly. The latent space is a compressed, high-level representation of the observable raw data, such as the distribution of the input data. In the case of GANs, the generator model applies meaning to points in a chosen latent space, so that new points drawn from the latent space can be provided to the generator as input and used to generate novel and distinct output examples.
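The mapping from latent vector to image can be sketched very compactly. The block below uses a deliberately tiny, untrained single-layer generator (the 100-dimensional latent size and the weight initialization are illustrative assumptions; a real generator is a trained multi-layer or convolutional network):

```python
import numpy as np

rng = np.random.default_rng(42)
LATENT_DIM = 100  # length of the random input vector (a common choice, not mandated)

# Randomly initialized weights of a one-layer stand-in generator.
W = rng.normal(0.0, 0.02, size=(LATENT_DIM, 28 * 28))
b = np.zeros(28 * 28)

def generator(z):
    """Map a latent vector z to a 28x28 'image' with pixel values in (-1, 1)."""
    return np.tanh(z @ W + b).reshape(28, 28)

z = rng.normal(size=LATENT_DIM)  # a point sampled from the Gaussian latent space
image = generator(z)
print(image.shape)               # (28, 28)
```

Each different `z` yields a different output image, which is exactly the "drawing new points from the latent space" idea described above.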

Fig 2 : GAN Generator Model [15]

Discriminator Model

The discriminator in a GAN [29] is essentially a classifier. It tries to differentiate between genuine data and data created by the generator [15], and its network design should suit the type of data it is classifying. Two loss functions are involved in training [29], but during discriminator training, the discriminator disregards the generator loss and focuses solely on the discriminator loss. The discriminator classifies both genuine data and fake data from the generator. When it misclassifies a genuine instance as fake or a fake instance as real, the discriminator suffers a loss, and backpropagating the discriminator loss through the discriminator network updates the discriminator's weights.
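The classification role is easy to see in miniature. Below, an untrained single-layer logistic model stands in for the discriminator (the weight scale is an illustrative assumption; a real discriminator is a trained convolutional network), mapping a 28×28 image to a probability of being real:

```python
import numpy as np

rng = np.random.default_rng(7)

# Randomly initialized weights of a one-layer stand-in discriminator.
w = rng.normal(0.0, 0.02, size=28 * 28)
b = 0.0

def discriminator(image):
    """Return the estimated probability (in (0, 1)) that a 28x28 image is real."""
    logit = image.reshape(-1) @ w + b
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid squashes the score to (0, 1)

sample = rng.normal(size=(28, 28))
p_real = discriminator(sample)
print(0.0 < p_real < 1.0)  # True
```

The sigmoid output is what lets the losses described later treat "real" as a target of 1 and "fake" as a target of 0.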

Fig 3 : GAN Discriminator Model [15]

Are GANs supervised or unsupervised?

Generative adversarial networks, or GANs, are an approach to generative modeling that employs deep learning methods such as convolutional neural networks. In machine learning, generative modeling is an unsupervised learning task that entails automatically detecting and learning the regularities or patterns in input data [15]. This is done so that the model can produce new examples that could plausibly have been drawn from the original dataset.

How Generative Adversarial Networks Work

Fig 4 : GAN for Dummies [24]

While one neural network, known as the generator [15], produces new data instances, another, known as the discriminator, evaluates each one to decide whether or not it belongs to the training dataset. The pictures we will make resemble those in the Fashion-MNIST dataset, which contains real-world photos. When provided with examples from the real dataset, the discriminator's purpose is to identify which ones are genuine. Meanwhile, the generator creates new synthetic pictures and passes them to the discriminator, hoping that, even though they are counterfeit, they will be accepted as genuine. The generator's objective is to produce presentable images that pass without being detected; the discriminator's objective is to identify images coming from the generator as fake [15].

Generator networks are fed noise, which may be drawn from a random distribution, and are asked to produce fake data from it. The discriminator [16] receives this fake data from the generator. Once training is over, the generator should be able to produce realistic data from noise alone. Remarkably, the generator learns to create meaningful visuals without ever seeing real images directly.

The discriminator, or adversarial network, acts as an opponent for the generator. Its function is to distinguish between two classes of data by classification: the actual data (labeled 1) and the fake data produced by the generator (labeled 0). [16]

Here are the steps a GAN takes:

  1. The generator takes in random numbers and returns an image.
  2. This generated image is fed into the discriminator alongside a stream of images taken from the actual, ground-truth dataset.
  3. The discriminator takes in both real and fake images and returns probabilities: a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake [15].
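Those steps can be strung together end to end in a minimal sketch. Both networks here are untrained, single-layer stand-ins (an assumption for brevity; real GANs use trained deep networks), but the data flow is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 100

# Untrained single-layer stand-ins for the two networks.
G_W = rng.normal(0.0, 0.02, size=(LATENT_DIM, 28 * 28))
D_w = rng.normal(0.0, 0.02, size=28 * 28)

def generator(z):
    return np.tanh(z @ G_W)  # step 1: random numbers -> (flattened) image pixels

def discriminator(flat_image):
    return 1.0 / (1.0 + np.exp(-(flat_image @ D_w)))  # step 3: image -> P(real)

z = rng.normal(size=LATENT_DIM)          # random input vector
fake = generator(z)                      # generated 28x28 image, flattened
real = rng.uniform(-1, 1, size=28 * 28)  # stand-in for a ground-truth image

# Step 2/3: both images flow into the discriminator, which scores each in (0, 1).
print(discriminator(fake), discriminator(real))
```

Training then consists of nudging `D_w` so real images score near 1 and fakes near 0, while nudging `G_W` so that fakes score near 1.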

This results in a double feedback loop:

(i) We know the ground truth of the images, so the discriminator is in a feedback loop with the ground truth. (ii) The discriminator and the generator are in a feedback loop with each other [6].

The discriminator network for MNIST is a standard convolutional network that classifies the images fed into it, using a binary classifier to determine whether they are real or fake. [15] The generator is, in a sense, an inverse convolutional network: while a standard convolutional classifier down-samples an image into a probability, the generator up-samples a vector of random noise into an image. In the first case, data is discarded through down-sampling techniques such as max-pooling; in the second, data is created.
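The up-sampling in a DCGAN-style generator is typically done with transposed (fractionally strided) convolutions. As a small sanity check on the sizes, the standard output-size formula for a transposed convolution can be traced for a typical 7 → 14 → 28 pipeline (the specific layer sizes here are an illustrative assumption, not taken from the article):

```python
def conv_transpose_out(size, kernel, stride, padding):
    """Spatial output size of a transposed convolution (no output_padding):
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

# A common DCGAN-style up-sampling path toward 28x28 images:
s = 7                                                      # start from a 7x7 map
s = conv_transpose_out(s, kernel=4, stride=2, padding=1)   # 7  -> 14
s = conv_transpose_out(s, kernel=4, stride=2, padding=1)   # 14 -> 28
print(s)  # 28
```

This is the mirror image of a classifier, where strided convolutions halve the spatial size at each layer instead of doubling it.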

Both aim to optimize different and opposing objective functions, or loss functions, in a zero-sum game. In essence, this is an actor-critic setup: the discriminator changes its behavior as the generator does, and vice versa, with each network's gain coming at the other's expense [15].

Fig 5 : Discriminator functioning [15]

GANs have tremendous potential for both good and evil, since they can train autonomously to replicate any distribution of data [15]. In other words, GANs can be trained to generate worlds similar to our own in any domain: visuals, music, speech, and literature. They are, in some ways, robot artists, and their work is stunning, even heartbreaking. They can, however, also be used to create fake media content, which is the technique underpinning propaganda.

Math involved in GAN modeling

The generator tries to minimize the following function, while the discriminator tries to maximize it:

Fig 6: Mathematical representation [19]

The function D(x) gives the probability that a given sample x originated from the training data X. The generator is trained to minimize log(1 - D(G(z))): when D(G(z)) is high, D believes G(z) is indistinguishable from X, so 1 - D(G(z)) is very low, and minimizing it further pushes the generator toward realistic samples. The discriminator, by contrast, aims to maximize both D(X) and 1 - D(G(z)). D's optimal state is reached when D(x) = 0.5 everywhere, meaning the discriminator can no longer tell real from fake. The generator G must therefore be trained to produce outputs for which the discriminator D cannot differentiate G(z) from X.

The question then becomes: why is this a minimax function? The discriminator seeks to maximize the objective V, while the generator strives to minimize it; minimizing and maximizing the same function is what gives the minimax term [21]. The two learn together by alternating gradient descent. The generator cannot directly affect the log(D(x)) term in the function, so, for the generator, minimizing the loss is equivalent to minimizing log(1 - D(G(z))) [20]. As training goes on, the discriminator and generator should both keep improving until the generator produces samples the discriminator can no longer reject [19].
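For reference, the minimax value function that the two paragraphs above describe (as given in Goodfellow et al., 2014) can be written out as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The first expectation is over real samples x and only involves D; the second is over latent noise z and is the only term the generator can influence, which is why the generator's loss reduces to log(1 - D(G(z))).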

Training the network

When training a GAN, the most important thing to remember is never to train both components at the same time [19]. Rather, the discriminator and the generator are trained independently: in the first phase we train the discriminator and update its weights accordingly, and in the second we train the generator while deactivating discriminator training.

Phase 1: This phase of training involves feeding the generator random data (in the form of a noise distribution). The generator produces some random images, which are passed to the discriminator along with real data. The discriminator learns to evaluate features of its inputs in order to distinguish the real data from the fake data. The discriminator outputs a probability; the difference between the predicted results and the actual labels is backpropagated, and the weights of the discriminator are updated. In this phase, backpropagation stops at the end of the discriminator, so the generator is not updated [19].

Phase 2: During this phase, the generator produces a large number of images that are used as input for the discriminator; no real images are provided at this point. The generator learns by trying to trick the discriminator into outputting false positives. The discriminator outputs probabilities that are assessed against the target labels, and the weights of the generator are updated through backpropagation. Keep in mind that the discriminator weights must not be updated during this backpropagation [19].
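The two alternating phases can be shown in a deliberately tiny, fully worked 1-D GAN: the "images" are just scalars drawn from N(4, 1), the generator is G(z) = a·z + c, and the discriminator is D(x) = sigmoid(w·x + b). The model forms, learning rate, and step count are illustrative assumptions, but the gradients are the exact ones for these losses:

```python
import numpy as np

rng = np.random.default_rng(0)
a, c = 1.0, 0.0          # generator parameters
w, b = 0.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # samples from the true distribution
    z = rng.normal(0.0, 1.0, batch)      # latent noise
    fake = a * z + c                     # generator output

    # Phase 1: update D only (minimize -log D(real) - log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Phase 2: update G only, with D frozen (non-saturating loss: -log D(fake)).
    d_fake = sigmoid(w * (a * z + c) + b)
    grad_a = -np.mean((1 - d_fake) * w * z)
    grad_c = -np.mean((1 - d_fake) * w)
    a -= lr * grad_a
    c -= lr * grad_c

print(round(c, 2))  # c drifts from 0 toward the data mean of 4
```

Note how each phase touches only its own parameters: the discriminator update never changes (a, c), and the generator update never changes (w, b), which is exactly the "freeze the other network" rule described above.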

Fig 7 : Generator and Discriminator

Benefits of GANs

The need for Generative Adversarial Networks (GANs) has risen dramatically in recent years. The method has been used effectively for high-fidelity natural image generation, data augmentation tasks, image compression improvements, and other applications [28]. GANs can do just about anything, from rendering super-realistic emotions to aiding the exploration of deep space, and from bridging the human-machine empathy gap to inventing new creative forms. GANs produce data that resembles the original data [10]: feed a GAN an image and it will create a new representation similar to the original, and it can likewise produce alternative versions of text, video, and audio. Because the GAN approach digs into the details of the data and can easily be adapted to different formats, it is an ideal tool for machine learning. We can use GANs and machine learning to recognize trees, streets, bicyclists, pedestrians, and parked cars, and also to calculate the distances between objects [10].

GANs: Limitations and Challenges

Results such as text or speech are very complex to generate, and you must continuously provide different types of data to check whether the model is accurate. Generative adversarial networks usually do not come with an objective function that provides information about training progress. In the absence of good evaluation metrics, it is as if you were working in the dark: there is no indicator for comparing the performance of multiple models or for telling you when to stop. [22] GANs currently produce the sharpest images of any generative approach, and both networks can be trained with backpropagation alone, using adversarial training. However, because a single training procedure must balance two networks, GANs are unstable to train, so selecting the right objectives is crucial [10].

Applications of GANs:

1. Generate Examples for Image Datasets

This is the demonstration used to show how to train stable GANs at scale in the 2015 DCGAN paper by Radford, et al., titled “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”. The authors demonstrated models for generating new examples of bedrooms.

Fig 8 : Examples of Image dataset created by GAN

2. Generate Cartoon Characters

Yanghua Jin, et al. demonstrate how to train and use a GAN to generate faces for anime characters (i.e. Japanese comic book characters) in their 2017 paper titled “Towards the Automatic Anime Characters Creation with Generative Adversarial Networks” [27].

Fig 9: Auto-anime characters created by GAN

3. Semantic Image-to-Image Translation

The 2017 paper “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs”, by Ting-Chun Wang et al., shows the use of conditional GANs to produce photorealistic images when given a semantic image or sketch as input [18].

Fig 10: High-Resolution image and semantic manipulation with GAN

Photograph of a cityscape generated from semantic image analysis and GANs. Taken from “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs”, 2017.

4. Fake Face Generation

Fake face generators [30] can generate endless fake faces using artificial intelligence.

Fig 11 : Examples of fake face generation by GAN [18]

5. Stock Market Prediction Based on Generative Adversarial Network

Kang Zhang, et al. demonstrate the novel architecture of the Generative Adversarial Network (GAN) by using the Multi-Layer Perceptron (MLP) as the discriminator and the Long Short-Term Memory (LSTM) as the generator to forecast the closing price of stocks in their 2018 paper titled “Stock Market Prediction Based on Generative Adversarial Network” [26].

Fig 12 : Illustration of price prediction by the GAN and some compared models on PAICC

Different types of GANs

Research on GANs has become very active and there has been a wide variety of GAN implementations [17]. Below are a few important ones that are currently in use:

  1. Vanilla GAN: This is the most basic type of GAN. Both the Generator and the Discriminator are simple multilayer perceptrons. The vanilla GAN algorithm is very simple: it uses stochastic gradient descent to optimize the minimax objective.
  2. Conditional GAN (CGAN): This deep learning method is characterized by the presence of conditional parameters. In a CGAN, an extra parameter ‘y’ (such as a class label) is added to the generator's input so that the corresponding data is generated. Labels are also fed into the Discriminator's input so that it can distinguish real data from fake generated data for that condition.
  3. Deep Convolutional GAN (DCGAN): One of the most popular and successful implementations of GAN. ConvNets replace the multilayer perceptrons in the model, convolutional strides are used instead of max pooling, and layers are not fully connected.
  4. Laplacian Pyramid GAN (LAPGAN): The Laplacian pyramid is a linear invertible image representation composed of a set of band-pass images, spaced one octave apart, plus a low-frequency residual. This approach uses multiple Generator and Discriminator networks, one per level of the Laplacian pyramid. The image is first down-sampled at each level of the pyramid; then, in a backward pass, it is up-scaled again at each level, with conditional GANs adding detail (from injected noise) at each level until the image reaches its original size. Images produced by this method are of extremely high quality.
  5. Super Resolution GAN (SRGAN): As the name suggests, SRGANs use deep neural networks along with adversarial networks to produce higher-resolution images [17]. GANs of this type are particularly useful for optimally up-scaling low-resolution images while minimizing errors.
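To make the CGAN idea in item 2 concrete: the condition ‘y’ is simply concatenated onto the usual inputs of both networks. A minimal sketch of the input construction (the one-hot encoding and the dimensions are illustrative assumptions):

```python
import numpy as np

NUM_CLASSES, LATENT_DIM = 10, 100
rng = np.random.default_rng(1)

def one_hot(label, n=NUM_CLASSES):
    """Encode a class label as a one-hot vector of length n."""
    v = np.zeros(n)
    v[label] = 1.0
    return v

# Generator input in a CGAN: noise z concatenated with the condition y.
z = rng.normal(size=LATENT_DIM)
y = one_hot(3)                           # ask for class 3, e.g. "dress"
gen_input = np.concatenate([z, y])       # shape (110,)

# Discriminator input: the (flattened) image concatenated with the same label,
# so it judges "real AND consistent with y" rather than just "real".
image = rng.uniform(-1, 1, size=28 * 28)
disc_input = np.concatenate([image, y])  # shape (794,)

print(gen_input.shape, disc_input.shape)
```

This is what lets a trained CGAN generate an example of a *chosen* class on demand, instead of an arbitrary sample from the whole distribution.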


GANs have become a very popular and widespread technique across various industries for solving a wide variety of problems. They may seem easy to train, but in practice they are not, because training requires balancing two networks, which makes them unstable. Whether used for good or ill, GANs are capable of mimicking any distribution of data. They can be trained to create worlds that almost mirror our own in any domain: images, music, speech, or prose. They are robot artists in a sense, and their output is impressive, poignant even. But they can also be used to generate fake media content and are the technology underpinning Deep Fakes. We have discussed use cases and the implementation of GANs in this blog. The more accurate and advanced GANs become, the greater the value they can provide to businesses. Nevertheless, theory without practice is of no use, which is why we trained the neural network on the familiar Fashion-MNIST dataset.

The employment of GANs in industry is a great way to address the requirements of this generation. As noted previously, GANs can detect deep-fake videos, distinguish fake pictures from real ones, and produce new images. Further study might lead to real-time risk assessment, which could be applied in the future. The ideas offered in this blog give a quick overview of GANs in industry and how they might be put to good use.

We noticed that there are a number of applications of GANs that are regularly published in research publications. We expect that a number of further publications will follow in the near future. The above applications may inspire you to generate your own GAN — perhaps you can come up with your own!

As a final note, I would like to share a few additional resources with you in case you want to try things on your own, or just better understand how GANs work in general.


[1] Generative Adversarial Network (GAN) research paper

[2] Top 5 Interesting Applications of GANs for Every Machine Learning Enthusiast! by JalFaizy Shaikh, April 8, 2019

[3] Andrew Ng’s Stanford notes

[4] Introduction to Generative Adversarial Networks (GANs) by Aditya Sharma, June 28, 2021

[5] Deep Learning CNN for Fashion-MNIST Clothing Classification

by Jason Brownlee on May 10, 2019, in Deep Learning for Computer Vision

[6] Generative Algorithms in DL by Darshan Dilipbhai Patel

[7] On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes, by Andrew Ng and Michael I. Jordan

[8] The Math Behind Generative Adversarial Networks

[9] A Beginner’s Guide to Generative Adversarial Networks (GANs) By Data Science

[10] Advantages and disadvantages of generative adversarial networks (GAN) by Junaid Rehman

[12] List of Papers published on GANs

[13] Deep Learning CNN for Fashion-MNIST Clothing Classification

by Jason Brownlee on May 10, 2019, in Deep Learning for Computer Vision

[14] A Gentle Introduction to Generative Adversarial Networks (GANs)

by Jason Brownlee on June 17, 2019, in Generative Adversarial Networks

[15] A Beginner’s Guide to Generative Adversarial Networks (GANs):

[16] Generative Adversarial Networks[The discriminator]

[17] Generative Adversarial Networks, 15 Jan, 2019

[18] 18 Impressive Applications of Generative Adversarial Networks (GANs) by Jason Brownlee on June 14, 2019, in Generative Adversarial Networks

[19] Introduction to GANs on Fashion MNIST Dataset

[20] The Math Behind GANs (Generative Adversarial Networks) by Mayank Vadsola, Jan 1, 2020

[21] GANs — A Brief Introduction to Generative Adversarial Networks by Shweta Goyal, Jun 2, 2019

[22] Improved techniques for training GANs

[23] GAN real world implementations

[24] GAN for Dummies








