Emotion-based Art Generation using C-GAN
Can AI be creative and understand emotions through art?
With the emergence of Deep Learning-based solutions for image generation and emotion classification, I was wondering if we could bring these two tasks together to build a model that takes a simple emotion (positive, negative, or neutral) as input and generates a piece of art that somehow reflects that emotion.
This project raises philosophical questions in computer science as well as in art.
Does an emotion arise only from a visual stimulus or are there other unconscious factors that influence our feelings when we look at a painting? If so, do these visual stimuli generally evoke a common feeling, regardless of the viewer?
Most artists and emotion researchers would reject both propositions, answering that each viewer has their own appreciation, guided by their own life experiences, and that the origins of emotions are difficult to decipher.
I will leave these philosophical questions here in the introduction and hope that a system as surprising as a neural network can extract general patterns that map to basic emotions. For example, dark colors might evoke negative reactions while bright, colorful palettes might evoke positive ones.
For once, I actually wanted to take advantage of the hidden biases in datasets, and to play with them to create art!
→ Wiki-Art: Visual Art Encyclopedia
Wiki-Art is a large dataset of images of paintings from museums, universities, city halls, and other municipal buildings in over 100 countries. Most of these works are not on public display. Wiki-Art contains paintings from 195 different artists and almost 100,000 images in total. The dataset is available on Kaggle and provides various style categories:
'abstract', 'animal-painting', 'cityscape', 'figurative', 'flower-painting', 'genre-painting', 'landscape', 'marina', 'mythological-painting', 'nude-painting-nu', 'portrait', 'religious-painting', 'still-life', 'symbolic-painting'.
Wiki-Art Emotions is composed of 4105 art images annotated with emotions and is built from WikiArt. Each image is annotated with at least one and up to twenty emotion categories.
For simplicity, I merged the fine-grained emotions into only 3 final categories (positive, negative, and neutral) per image. For instance, I merged ‘regret’ and ‘fear’ into the negative category. Similarly, I moved ‘optimism’ and ‘love’ into the positive category. Finally, for the neutral category, I merged labels like ‘neutral’ and ‘humility’.
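This merging step can be sketched as a simple lookup table plus a majority vote over each image's tags. The fine-grained label names and the voting rule below are illustrative assumptions, not the exact vocabulary or procedure from the article:

```python
from collections import Counter

# Map fine-grained emotion labels to 3 coarse categories.
# These label names are illustrative, not the full WikiArt Emotions vocabulary.
EMOTION_MAP = {
    "regret": "negative", "fear": "negative", "anger": "negative",
    "optimism": "positive", "love": "positive", "happiness": "positive",
    "neutral": "neutral", "humility": "neutral",
}

def coarse_label(fine_emotions):
    """Collapse an image's fine-grained emotion tags into one of
    'positive', 'negative', or 'neutral' by majority vote."""
    votes = Counter(EMOTION_MAP[e] for e in fine_emotions if e in EMOTION_MAP)
    return votes.most_common(1)[0][0] if votes else None

print(coarse_label(["regret", "fear", "optimism"]))  # → negative
```

Images annotated with up to twenty emotion categories reduce cleanly to a single coarse label this way; unknown tags are simply ignored.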
I realized that with only 3 categories and 4,105 images, my customized version of Wiki-Art Emotions would not be large enough to train a generative adversarial network (the model would have suffered from mode collapse).
So I decided to create an image classifier that takes an image as input and classifies it among the 3 emotion categories (positive, negative, and neutral) so that I could use this classifier to label other data in the original WikiArt dataset. I will discuss the architecture of the model I used to build the Image-to-Emotion classifier in the next section.
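The labeling loop that extends the dataset is straightforward once the classifier exists. In this sketch, `predict_emotion` is a stand-in for the trained Image-to-Emotion model (here it returns dummy probabilities so the code runs), and the confidence threshold is my own assumption, not something stated in the article:

```python
# Pseudo-label unlabeled WikiArt images with a trained emotion classifier.
CLASSES = ["positive", "negative", "neutral"]

def predict_emotion(image_path):
    """Stand-in for real model inference: returns class probabilities.
    A real implementation would load the image and run the classifier."""
    return {"positive": 0.8, "negative": 0.15, "neutral": 0.05}

def pseudo_label(image_paths, threshold=0.6):
    """Assign each image its top predicted emotion, keeping only
    predictions whose confidence exceeds `threshold`."""
    labeled = []
    for path in image_paths:
        probs = predict_emotion(path)
        top = max(probs, key=probs.get)
        if probs[top] >= threshold:
            labeled.append((path, top))
    return labeled

print(pseudo_label(["painting_001.jpg", "painting_002.jpg"]))
```

Filtering by confidence is a common precaution when pseudo-labeling, since low-confidence labels would otherwise inject noise into the GAN's training data.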
Overall, I ended up with an extended Wiki-Art Emotions dataset of 95,808 images labeled with 3 basic emotions: positive (43,792 images), negative (33,091 images), and neutral (18,925 images).
Since my goal was to build an Emotion-to-Image generator, I did not want to increase the complexity of the task by also letting the user choose the style of the generated fake painting.
Instead, I decided that I would build several Emotion-to-Image generators, one for each of the following styles: Abstract (15,000 images), Flower-Painting (1,800 images), Landscape (15,000 images), and Portrait (15,000 images).
Here is the distribution of “emotions” for each of the selected styles: