
TediGAN

[Paper] [License: MIT] [Python] [Open In Colab]

PyTorch implementation of the papers W. Xia, Y. Yang, J.-H. Xue, and B. Wu, "TediGAN: Text-Guided Diverse Face Image Generation and Manipulation" (https://arxiv.org/abs/2012.03308) and "Towards Open-World Text-Guided Face Image Generation and Manipulation" (https://arxiv.org/abs/2104.08910).

Contact: weihaox AT outlook dot com

Update

[2021/8/28] add an online demo implemented by @bfirsh, built with the open-source tool Cog.

[2021/4/20] add extended paper.

[2021/3/12] add support for high-resolution and multi-modality.

[2021/2/20] add Colab Demo for image editing using StyleGAN and CLIP.

[2021/2/16] add codes for image editing using StyleGAN and CLIP.

TediGAN Framework

We propose a novel method (abbreviated as TediGAN) for image synthesis from textual descriptions, which unifies two different tasks (text-guided image generation and manipulation) into the same framework and achieves high accessibility, diversity, controllability, and accuracy for facial image generation and manipulation. Through the proposed multi-modal GAN inversion and a large-scale multi-modal dataset, our method can effectively synthesize images of unprecedented quality.

Train the StyleGAN Generator

We use the training scripts from genforce. You should prepare the required dataset (FFHQ for faces or LSUN Bird for birds) before training the StyleGAN generator.

  • Train on FFHQ dataset: GPUS=8 CONFIG=configs/stylegan_ffhq256.py WORK_DIR=work_dirs/stylegan_ffhq256_train ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR}

  • Train on LSUN Bird dataset: GPUS=8 CONFIG=configs/stylegan_lsun_bird256.py WORK_DIR=work_dirs/stylegan_lsun_bird256_train ./scripts/dist_train.sh ${GPUS} ${CONFIG} ${WORK_DIR}

Or you can directly use a pretrained StyleGAN generator for ffhq_face_1024, ffhq_face_256, cub_bird_256, or lsun_bird_256.

Invert the StyleGAN Generator

This step finds the matching latent codes of given images in the latent space of a pretrained GAN model, e.g., StyleGAN, StyleGAN2, or StyleGAN2-Ada (it should be the same model as in the previous step). We have included the inverted codes, obtained using idinvert, in our Multi-Modal-CelebA-HQ Dataset.

Our original method is based on idinvert (including StyleGAN training and GAN inversion). To generate 1024 resolution images and show the scalability of our framework, we also learn the visual-linguistic similarity based on pSp.

Thanks to the scalability of our framework, either of these two approaches can be used to invert a pretrained StyleGAN.
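As a rough illustration of what inversion involves, below is a minimal optimization-based sketch, not the exact idinvert or pSp procedure. It assumes a pretrained generator G that maps a latent code to an image, and the function name invert_image is illustrative; the perceptual and encoder-regularization terms used by idinvert are omitted for brevity.

import torch
import torch.nn.functional as F

def invert_image(G, target, latent_dim=512, steps=500, lr=0.01):
    """G: pretrained generator (latent -> image); target: (1, 3, H, W) tensor in [-1, 1]."""
    latent = torch.zeros(1, latent_dim, device=target.device, requires_grad=True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        recon = G(latent)                   # synthesized image for the current code
        loss = F.mse_loss(recon, target)    # pixel reconstruction term only
        loss.backward()
        optimizer.step()
    return latent.detach()                  # inverted code of the target image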

Train the Text Encoder

This step learns the visual-linguistic similarity, i.e., text-image matching, by mapping images and text into a common embedding space. The main difference from previous methods is that they learn text-image relations by training from scratch on paired texts and images, whereas ours forces the text embedding to match a latent space already learned from images alone.
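The idea can be sketched as follows, assuming paired training data of captions and the inverted latent codes of their images; TextToLatent and matching_loss are illustrative names, not modules from this repository.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToLatent(nn.Module):
    """Maps a tokenized caption into the latent space of the pretrained StyleGAN."""
    def __init__(self, vocab_size=5000, embed_dim=256, latent_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, latent_dim, batch_first=True)

    def forward(self, tokens):              # tokens: (B, T) integer ids
        out, _ = self.rnn(self.embed(tokens))
        return out[:, -1]                   # (B, latent_dim) text embedding

def matching_loss(text_latent, image_latent):
    # pull the text embedding toward the inverted code of its paired image,
    # so that text lands in the latent space learned from images alone
    return F.mse_loss(text_latent, image_latent)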

Using a Pretrained Text Encoder

We can also use a powerful pretrained vision-language model, e.g., CLIP, to replace the visual-linguistic learning module. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on 400 million image-text pairs.

In this case, we have a pretrained image model, StyleGAN (or StyleGAN2, StyleGAN2-Ada), and a pretrained text encoder, CLIP. The inversion step is still necessary. Given the inverted code of an image, the desired manipulation or generation result can be obtained simply by instance-level optimization with an additional CLIP loss term.

The first step is to install CLIP by running the following commands:

pip install ftfy regex tqdm
pip install git+https://github.com/openai/CLIP.git

The pretrained model will be downloaded automatically from the OpenAI website (RN50 or ViT-B/32).
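The instance-level optimization with a CLIP term can be sketched as follows. This is a minimal illustration under assumptions, not the exact implementation of invert.py: G is a pretrained generator, the edit starts from the inverted code of the input image, edit_with_clip is an illustrative name, and the latent regularizer and weighting are simplified stand-ins for the script's options.

import torch
import torch.nn.functional as F
import clip

def edit_with_clip(G, inverted_code, description, clip_weight=1.0, steps=200, lr=0.01, device='cuda'):
    model, _ = clip.load('ViT-B/32', device=device)
    model = model.float()                                # full precision so gradients flow cleanly
    tokens = clip.tokenize([description]).to(device)
    latent = inverted_code.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        image = G(latent)                                # (1, 3, H, W) synthesized image
        image = F.interpolate(image, size=(224, 224))    # CLIP input resolution (normalization omitted)
        sim = F.cosine_similarity(model.encode_image(image), model.encode_text(tokens))
        clip_loss = 1.0 - sim.mean()                     # push the image toward the description
        latent_reg = F.mse_loss(latent, inverted_code)   # stay close to the original inversion
        loss = clip_weight * clip_loss + latent_reg
        loss.backward()
        optimizer.step()
    return latent.detach()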

The manipulated or generated results can be obtained by simply running:

python invert.py --mode='man'               # 'man' for manipulation, 'gen' for generation
	--image_path='examples/142.jpg' # path of the input image
	--description='he is old'       # a textual description, e.g., he is old.
	--loss_weight_clip='1.0'        # weight for the CLIP loss.
	--num_iterations=200            # number of optimization iterations

or you can try the online demo:

streamlit run streamlit_app.py

The diverse and high-resolution results from sketch or label can be obtained by running:

cd ext/
python inference.py 
	--exp_dir=experiment                                 # path of logs and results
	--checkpoint_path=pretrained_models/{model_name}.pt  # path of pretrained models
	--data_path=experiment/images/{dir}                  # path of input images
python demo.py --description='he is old'
	--mode='man' --f_oom=False                           # set to True if an OOM error occurs
	--step=500   --loss_clip_weight=200

The pretrained models can be downloaded here.

Text-to-image Benchmark

Datasets

  • Multi-Modal-CelebA-HQ Dataset [Link]
  • CUB Bird Dataset [Link]
  • COCO Dataset [Link]

Publications

Below is a curated list of related publications with code (the full list can be found here).

Text-to-image Generation

  • [DALL-E] Zero-Shot Text-to-Image Generation (2021) [paper] [code] [dVAE] [blog]
  • [DF-GAN] Deep Fusion Generative Adversarial Networks for Text-to-Image Synthesis (2020) [paper] [code]
  • [ControlGAN] Controllable Text-to-Image Generation (NeurIPS 2019) [paper] [code]
  • [DM-GAN] Dynamic Memory Generative Adversarial Networks for Text-to-Image Synthesis (CVPR 2019) [paper] [code]
  • [MirrorGAN] Learning Text-to-image Generation by Redescription (CVPR 2019) [paper] [code]
  • [Obj-GAN] Object-driven Text-to-Image Synthesis via Adversarial Training (CVPR 2019) [paper] [code]
  • [SD-GAN] Semantics Disentangling for Text-to-Image Generation (CVPR 2019) [paper] [code]
  • [HD-GAN] Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network (CVPR 2018) [paper] [code]
  • [AttnGAN] Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks (CVPR 2018) [paper] [code]
  • [StackGAN++] Realistic Image Synthesis with Stacked Generative Adversarial Networks (TPAMI 2018) [paper] [code]
  • [StackGAN] Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks (ICCV 2017) [paper] [code]
  • [GAN-INT-CLS] Generative Adversarial Text to Image Synthesis (ICML 2016) [paper] [code]

Text-guided Image Manipulation

  • [ManiGAN] ManiGAN: Text-Guided Image Manipulation (CVPR 2020) [paper] [code]
  • [Lightweight-Manipulation] Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation (NeurIPS 2020) [paper] [code]
  • [SISGAN] Semantic Image Synthesis via Adversarial Learning (ICCV 2017) [paper] [code]
  • [TAGAN] Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language (NeurIPS 2018) [paper] [code]

Metrics

Acknowledgments

The GAN inversion code borrows heavily from idinvert and pSp. The StyleGAN implementation is from genforce and the StyleGAN2 implementation is from Kim Seonghyeon.

Citation

If you find our work, code, or the benchmark helpful for your research, please consider citing:

@inproceedings{xia2021tedigan,
  title={TediGAN: Text-Guided Diverse Face Image Generation and Manipulation},
  author={Xia, Weihao and Yang, Yujiu and Xue, Jing-Hao and Wu, Baoyuan},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021}
}

@article{xia2021open,
  title={Towards Open-World Text-Guided Face Image Generation and Manipulation},
  author={Xia, Weihao and Yang, Yujiu and Xue, Jing-Hao and Wu, Baoyuan},
  journal={arXiv preprint arXiv:2104.08910},
  year={2021}
}
