vlejd / improved_wgan_training

Code for reproducing experiments in "Improved Training of Wasserstein GANs"

Improved Training of Wasserstein GANs

This project tests Wasserstein GAN objectives on language data. The code is built on a fork of the popular project of the same name.

We try to reproduce the results from their paper. We clean up their language-generation code and experiment with smaller datasets, standard preprocessing, and slightly different architectures.
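As a concrete illustration of the "standard preprocessing" we mean, here is a minimal sketch of turning raw text into fixed-length character sequences for a character-level model. The names (`SEQ_LEN`, `make_dataset`) are illustrative and not taken from the repository.

```python
# Hypothetical sketch of standard character-level preprocessing:
# lowercase the text, build a character vocabulary, and cut the
# stream into fixed-length training sequences.
# SEQ_LEN and make_dataset are illustrative names, not from the repo.

SEQ_LEN = 32  # length of each training sample, in characters

def make_dataset(text, seq_len=SEQ_LEN):
    text = text.lower()
    # map each distinct character to an integer id
    charmap = {ch: i for i, ch in enumerate(sorted(set(text)))}
    # split into non-overlapping fixed-length sequences, dropping the tail
    seqs = [
        [charmap[ch] for ch in text[i:i + seq_len]]
        for i in range(0, len(text) - seq_len + 1, seq_len)
    ]
    return seqs, charmap

seqs, charmap = make_dataset("to be or not to be, that is the question")
print(len(seqs), len(seqs[0]))  # one 32-character sequence
```

The generator then produces sequences of this fixed length, one softmax over the character vocabulary per position.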

We stripped out a lot of unused code to better understand the rest.

Datasets

You can download the Google Billion Word dataset at http://www.statmt.org/lm-benchmark/. Other datasets are available at https://drive.google.com/drive/folders/0B7MLuc1jq3A8eFpVWUZ0eDEwdlE?usp=sharing

Prerequisites

  • Python, NumPy, TensorFlow, SciPy, Matplotlib
  • A recent NVIDIA GPU

Important files

  • gan_language.py — the most important file: a character-level language model trained with Improved WGAN. It has built-in help; specify the dataset directory through its options.
  • Improved WGAN.ipynb — Jupyter notebook with my attempt to implement it in Keras.
  • Improved WGAN.py — Improved WGAN.ipynb exported to a pure Python script.
  • The directories cuted, cuted_small, kanye, quora and romeo contain graphs for the given datasets and some sample generated texts.
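For context, the "Improved WGAN" objective from the paper replaces weight clipping with a gradient penalty on the critic. A sketch of the critic loss (as given in the original paper, with the paper's default λ = 10):

```latex
L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big]
  - \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big]
  + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}
    \big[ \big( \lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1 \big)^2 \big]
```

where \(\mathbb{P}_g\) and \(\mathbb{P}_r\) are the generated and real distributions, and \(\hat{x}\) is sampled uniformly along straight lines between pairs of real and generated samples.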

About

License: MIT License


Languages

  • Python 46.8%
  • Jupyter Notebook 30.1%
  • TeX 23.1%