nirmal-25 / Text-to-Image-GAN

Implementation of GAN-based text-to-image models for a comparative study on the CUB and COCO datasets

Text-to-Image-GAN

We experiment with GAN-based text-to-image generation models on the CUB and COCO datasets and evaluate them with the Inception Score (IS) and the Fréchet Inception Distance (FID) to compare output images across different architectures. The models are implemented in PyTorch 1.11.0. Save the datasets in data and follow the steps given in each model's folder to replicate our results.
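For evaluation, the repository relies on the PyTorch IS and FID implementations listed in the references below. As a rough, self-contained illustration of how these metrics are computed on generated images, the sketch below uses torchmetrics (which wraps torch-fidelity) rather than this repository's evaluation scripts, and random tensors stand in for real and generated images:

```python
# Minimal sketch of IS/FID evaluation with torchmetrics (an illustration,
# not this repository's actual evaluation code).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.inception import InceptionScore

# Both metrics expect uint8 image tensors of shape (N, 3, H, W) by default.
# Random placeholders stand in for real and generated 256x256 images.
real_images = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (16, 3, 256, 256), dtype=torch.uint8)

# FID compares Inception features of real vs. generated images (lower is better).
# feature=64 keeps this toy example fast; 2048 is the standard setting.
fid = FrechetInceptionDistance(feature=64)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print("FID:", fid.compute().item())

# IS scores generated images alone from Inception class predictions (higher is better).
inception = InceptionScore()
inception.update(fake_images)
is_mean, is_std = inception.compute()
print("IS: {:.2f} +/- {:.2f}".format(is_mean.item(), is_std.item()))
```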

Experimental setup
  • Learning rate: 0.0002 for ManiGAN and Lightweight ManiGAN; 0.0001 for DF-GAN
  • Optimizer: Adam
  • Output image size: 256x256
  • Epochs: 350
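As a rough sketch of how this setup translates to code, the snippet below wires the learning rates above into Adam optimizers; the generator and discriminator modules are placeholders, not the actual architectures from the model folders:

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for the real generator/discriminator;
# the actual architectures live in the DF-GAN / ManiGAN folders.
netG = nn.Sequential(nn.Linear(100, 3 * 256 * 256))
netD = nn.Sequential(nn.Linear(3 * 256 * 256, 1))

# Learning rates from the setup above; pick the one matching the model.
lrs = {"DF-GAN": 1e-4, "ManiGAN": 2e-4, "Lightweight ManiGAN": 2e-4}
lr = lrs["DF-GAN"]

optimizer_G = torch.optim.Adam(netG.parameters(), lr=lr)
optimizer_D = torch.optim.Adam(netD.parameters(), lr=lr)

# Training then runs for 350 epochs, producing 256x256 output images.
```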

Results

Synthesized images

Experimental Results

Our final weight files for trained models

References

[1] Deep Fusion GAN - DF-GAN
[2] Text-Guided Image Manipulation - ManiGAN
[3] Lightweight Architecture for Text-Guided Image Manipulation - Lightweight ManiGAN
[4] PyTorch Implementation for Inception Score (IS) - IS
[5] PyTorch Implementation for Fréchet Inception Distance (FID) - FID

License: MIT


Languages

Python 91.7%, Jupyter Notebook 8.3%