[NIPS 2017] Toward Multimodal Image-to-Image Translation

Home Page: https://junyanz.github.io/BicycleGAN/

BicycleGAN

[Project Page] [Paper] [Demo Video]

PyTorch implementation for multimodal image-to-image translation. For example, given the same night image, our model is able to synthesize possible day images with different types of lighting, sky, and clouds.
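The one-to-many idea can be sketched in a few lines of plain Python: a single input A plus different latent codes z drawn from a Gaussian prior yields different outputs B = G(A, z). The `generate` function and `LATENT_DIM` below are toy stand-ins for illustration only, not the repository's actual API.

```python
import random

LATENT_DIM = 8  # illustrative latent-code size, not the repo's setting

def sample_z(rng):
    # z is drawn from a standard Gaussian prior, as in the paper
    return [rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM)]

def generate(a, z):
    # toy deterministic "generator": mixes the input with the latent code;
    # the real model is a trained conditional network
    return tuple(round(ai + zi, 4) for ai, zi in zip(a, z))

rng = random.Random(0)
night_image = tuple(0.5 for _ in range(LATENT_DIM))  # one fixed input A
outputs = [generate(night_image, sample_z(rng)) for _ in range(3)]
# same input A, three different z -> three distinct plausible outputs
assert len(set(outputs)) == 3
```

The point is only that the output varies with z while the input stays fixed; diversity across z is exactly what the test scripts below visualize.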

Toward Multimodal Image-to-Image Translation.
Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman.
UC Berkeley and Adobe Research
In NIPS, 2017.

Example results

Prerequisites

  • Linux or macOS
  • Python 2 or 3
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Installation

  • Clone this repo:
git clone -b master --single-branch https://github.com/junyanz/BicycleGAN.git
cd BicycleGAN

For pip users:

bash ./scripts/install_pip.sh

For conda users:

bash ./scripts/install_conda.sh

Use a Pre-trained Model

  • Download some test photos (e.g. edges2shoes):
bash ./datasets/download_testset.sh edges2shoes
  • Download a pre-trained model (e.g. edges2shoes):
bash ./pretrained_models/download_model.sh edges2shoes
  • Generate results with the model
bash ./scripts/test_shoes.sh

The test results will be saved to an HTML file at ./results/edges2shoes/val/index.html.

  • Generate results with synchronized latent vectors
bash ./scripts/test_shoes.sh --sync

Results can be found at ./results/edges2shoes/val_sync/index.html.
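The --sync flag's exact behavior is not spelled out here, but its name suggests the sampling scheme sketched below: the same list of latent codes is reused for every input image, so the i-th output of each input shares the same z_i (an assumption; all names in this snippet are hypothetical).

```python
import random

LATENT_DIM = 8
N_SAMPLES = 4  # outputs generated per input image

def sample_z(rng):
    return tuple(rng.gauss(0.0, 1.0) for _ in range(LATENT_DIM))

def assign_codes(inputs, sync, seed=0):
    # returns the latent codes each input image would be rendered with
    rng = random.Random(seed)
    if sync:
        # one shared list of codes, reused for every input
        shared = [sample_z(rng) for _ in range(N_SAMPLES)]
    codes = {}
    for img in inputs:
        codes[img] = shared if sync else [sample_z(rng) for _ in range(N_SAMPLES)]
    return codes

synced = assign_codes(["shoe_1", "shoe_2"], sync=True)
assert synced["shoe_1"] == synced["shoe_2"]      # same z across inputs

unsynced = assign_codes(["shoe_1", "shoe_2"], sync=False)
assert unsynced["shoe_1"] != unsynced["shoe_2"]  # independent draws
```

With synchronized codes, results for different inputs can be compared column by column, since each column corresponds to one shared latent vector.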

Generate Morphing Videos

  • We can also produce a morphing video similar to this GIF and YouTube video.
bash ./scripts/video_shoes.sh

Results can be found at ./videos/edges2shoes/.

Model Training

Coming soon!

Currently, we are working on merging our internal code with the public pix2pix/CycleGAN codebase, and retraining the models with the new code.

Datasets (from pix2pix)

Download the datasets using the following scripts. Many of the datasets were collected by other researchers. Please cite their papers if you use the data.

  • Download the test set:
bash ./datasets/download_testset.sh dataset_name
  • Download the training and test sets:
bash ./datasets/download_dataset.sh dataset_name

Models

Download the pre-trained models with the following script.

bash ./pretrained_models/download_model.sh model_name
  • edges2shoes (edge -> photo): trained on UT Zappos50K dataset.

More models are coming soon!

Citation

If you find this useful for your research, please use the following citation.

@incollection{zhu2017multimodal,
	title = {Toward Multimodal Image-to-Image Translation},
	author = {Zhu, Jun-Yan and Zhang, Richard and Pathak, Deepak and Darrell, Trevor and Efros, Alexei A and Wang, Oliver and Shechtman, Eli},
	booktitle = {Advances in Neural Information Processing Systems 30},
	year = {2017},
}

Acknowledgements

This code borrows heavily from the pytorch-CycleGAN-and-pix2pix repository.
