How to Train StyleGAN2-ADA with Custom Dataset



Learn how to train an AI to generate any images you want

Generated Bike Interpolation [Image by Author]

Have you ever wondered how to train your own generative model? The first time I discovered GAN applications like this face-generating website, I wondered how to train a GAN on other things. Luckily, I recently had the opportunity to train a bike-generating model as part of my research. In this article, I will document my experience and show you how to train StyleGAN2-ADA on your own images.

StyleGAN is one of the most popular generative models, developed by NVIDIA. Multiple versions of StyleGAN have been released, and we will be using the latest one, StyleGAN2-ADA. To avoid redundancy, I won’t explain StyleGAN here, as many articles have already explained it really well.

Training StyleGAN is computationally expensive. Hence, if you don’t have a decent GPU, you may want to train on the cloud. If you decide to train on Google Colab (it’s free), someone has made a nice notebook for this.

In this tutorial, I will be using the bike dataset BIKED. Feel free to use your own dataset. Just make sure all the training images are square and put them inside the same folder.

In this article, I will be using the TensorFlow implementation of StyleGAN2-ADA. Make sure you use TensorFlow version 1, as the code is not compatible with TensorFlow 2. Alternatively, if you prefer PyTorch, you can use the PyTorch version that has recently been released; it seems to be slightly faster. If you use PyTorch, you can still follow this tutorial with a slight difference in the dataset preparation.


  • 64-bit Python 3.6 or 3.7. Anaconda3 with numpy 1.14.3 or newer is recommended.
  • TensorFlow 1.14 is recommended, but TensorFlow 1.15 is also supported on Linux. TensorFlow 2.x is not supported.
  • On Windows you need to use TensorFlow 1.14, as the standard 1.15 installation does not include necessary C++ headers.
  • 1–8 high-end NVIDIA GPUs with at least 12 GB of GPU memory, NVIDIA drivers, CUDA 10.0 toolkit and cuDNN 7.5
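Before training, it is worth confirming that your environment actually satisfies these requirements. The helper below is a hypothetical sketch (not part of the StyleGAN2-ADA repo) that encodes the TensorFlow version rules listed above, so you can fail fast instead of hitting a cryptic error mid-training:

```python
# Hypothetical compatibility check for the TensorFlow requirements above.
def tf_version_supported(version: str, on_windows: bool = False) -> bool:
    """Return True if this TensorFlow version works with StyleGAN2-ADA (TF)."""
    major, minor = (int(x) for x in version.split(".")[:2])
    if major != 1:
        return False          # TensorFlow 2.x is not supported
    if on_windows:
        return minor == 14    # Windows needs 1.14 (1.15 lacks C++ headers)
    return minor in (14, 15)  # Linux supports both 1.14 and 1.15

print(tf_version_supported("1.14.0"))  # True
print(tf_version_supported("2.4.1"))   # False
```

In practice you would call it as `tf_version_supported(tf.__version__, on_windows=(os.name == "nt"))` right after importing TensorFlow.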


  1. Clone the StyleGAN2-ADA repository and go inside the directory
git clone https://github.com/NVlabs/stylegan2-ada.git
cd stylegan2-ada

2. Download or create your own dataset. I will be using the BIKED dataset, which I have already preprocessed. You can download my preprocessed version from Dropbox.

Sample Image of BIKED Dataset [CreativeGAN]
# Download the dataset
wget "" -q -O biked_dataset.tar.gz
# extract dataset
tar -zxvf biked_dataset.tar.gz
# Delete the tar.gz file
rm biked_dataset.tar.gz

After extracting the content, you will have a folder named BIKED that contains 4510 square images of bike designs.

Note: If you are using your own dataset, create a folder and put all training images inside the folder. Make sure all the images are square and the same size.
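If your images are not already square and uniformly sized, a small preprocessing script can take care of that. The sketch below is a hypothetical helper (not part of the StyleGAN2-ADA repo) that uses Pillow to pad each image onto a square white canvas and resize it to a single resolution:

```python
from pathlib import Path
from PIL import Image

def make_square(img: Image.Image, size: int = 512) -> Image.Image:
    """Pad an image to a square white canvas, then resize to size x size."""
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), "white")
    # Paste the original image centered on the square canvas
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas.resize((size, size), Image.LANCZOS)

def prepare_folder(src: str, dst: str, size: int = 512) -> None:
    """Convert every image in `src` into a square PNG of the same size in `dst`."""
    Path(dst).mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src).glob("*")):
        if path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            out = make_square(Image.open(path).convert("RGB"), size)
            out.save(Path(dst) / f"{path.stem}.png")

# Usage (paths are examples):
# prepare_folder("raw_images", "training_images", size=512)
```

Padding (rather than cropping) preserves the full design of each bike; for datasets like faces, a center crop may be more appropriate.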

3. Preparing Dataset

The code needs the dataset to be in .tfrecords format, so we first need to convert our images. StyleGAN2-ADA provides a script (dataset_tool.py) that makes this conversion easy.

# first argument is the output path, second argument is the path to the dataset
python dataset_tool.py create_from_images ./datasets/biked BIKED

This will create multi-resolution .tfrecords files in the ./datasets/biked/ folder.

4. Training StyleGAN2-ADA

# snap is how often you want to save the model and sample results
# res is the image resolution you want to train on
# augpipe is the augmentation pipeline: 'blit', 'geom', 'color', 'filter', 'noise', 'cutout', or a combination of these
python train.py --outdir ./results --snap=10 --data=./datasets/biked --augpipe=bgcfnc --res=512

There are many other arguments that you can modify; feel free to check train.py to learn more about them.

Once you run the command, training starts and periodically saves the results and the model file (.pkl) based on the snap argument you provided (in this case, every 10 ticks). Once you think the results are good enough, or the FID starts to plateau, you can stop training and use the last saved .pkl file.
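Deciding when the FID has plateaued can be done by eye from the metrics log, but a tiny helper makes it mechanical. The sketch below assumes each line of the FID metrics file ends with the FID value (the exact layout of the log may differ between versions, so treat the parser as an assumption):

```python
def fid_values(lines):
    """Extract the FID value from each log line, assuming it is the last token."""
    return [float(line.split()[-1]) for line in lines if line.strip()]

def has_plateaued(fids, window=3, tol=0.5):
    """True if the last `window` FID values vary by less than `tol`."""
    if len(fids) < window:
        return False
    recent = fids[-window:]
    return max(recent) - min(recent) < tol

# Usage (file name is an example):
# with open("results/.../metric-fid50k.txt") as f:
#     print(has_plateaued(fid_values(f.readlines())))
```

If `has_plateaued` returns True over several snapshots, further training is unlikely to improve quality much.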

Once you have the model file, you can generate images using this command:

python generate.py --outdir=out --trunc=0.5 --seeds=600-605 --network={path_to_pkl_model_file}

You can provide a range or comma-separated values for the seeds. The trunc argument is the value for the truncation trick: the higher the truncation value, the more diverse or extreme the output, but possibly at lower image quality; the lower the value, the higher the image quality, but the output is less diverse. A value of 1 disables truncation.
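The truncation trick itself is simple: each latent vector is pulled toward the average latent by the factor psi. The NumPy sketch below is only an illustration of that formula, not the repo's implementation:

```python
import numpy as np

def truncate(w: np.ndarray, w_avg: np.ndarray, psi: float = 0.5) -> np.ndarray:
    """Truncation trick: interpolate latent w toward the average latent w_avg.
    psi=1 leaves w unchanged (most diverse); psi=0 collapses to w_avg."""
    return w_avg + psi * (w - w_avg)

w_avg = np.zeros(4)                      # toy average latent
w = np.array([2.0, -2.0, 4.0, 0.0])      # toy sampled latent
print(truncate(w, w_avg, psi=0.5))       # halfway toward the average
print(truncate(w, w_avg, psi=1.0))       # unchanged
```

Latents near the average produce typical, clean samples, which is why lower psi trades diversity for quality.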

However, if you want to generate interpolation videos or a grid of images, you can refer to my previous article.

5. Transfer Learning or Resume Training

If your training stopped or crashed for some reason, you can still resume from the last saved progress. You just need to add the --resume argument with the path to the model (.pkl) file.

Additionally, you can use this argument for transfer learning. Instead of training from scratch, it is usually best to start with one of the pre-trained models, even if the dataset itself is not similar. Just replace the .pkl path with one of the pre-trained models provided by StyleGAN2-ADA.

In this example, I will resume training from my pre-trained model on the biked dataset.

python train.py --outdir ./results --snap=10 --data=./datasets/biked --augpipe=bgcfnc --res=512 --resume=full-bike-network-snapshot-004096.pkl

Training Result

Here is an animation of the training result of 256×256 resolution on the Tesla P100 GPU after training for a day.

StyleGAN2-ADA training progress for 1 day. [Image by Author]


[1] Regenwetter, L., Curry, B., & Ahmed, F. (2021). BIKED: A Dataset and Machine Learning Benchmarks for Data-Driven Bicycle Design.

[2] Nobari, A. H., Rashad, M. F., & Ahmed, F. (2021). CreativeGAN: Editing Generative Adversarial Networks for Creative Design Synthesis.

[3] Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., & Aila, T. (2020). Training Generative Adversarial Networks with Limited Data.

