RuiShu / deep-generative-models

Deep generative models in Tensorflow

Deep Generative Models

This repository provides a standardized implementation framework for several popular decoder-based deep generative models. The following models are currently implemented:

  1. VAE
  2. VAE (autoregressive inference)

In all cases, posterior regularization is applied to disentangle style (z) from content/label (y) on the SVHN dataset.
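For context, each VAE above is trained by maximizing the evidence lower bound (ELBO) over the style latent z, with the label y handled separately by the semi-supervised regularizer. A minimal numpy sketch of a Bernoulli ELBO with a diagonal-Gaussian z (illustrative only; not code from this repo, and all function names here are hypothetical):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def elbo(x, x_recon, z_mu, z_logvar):
    # Bernoulli reconstruction log-likelihood minus the KL penalty on the
    # style latent z. The content/label variable y would receive its own
    # classifier term and posterior regularizer (omitted here).
    eps = 1e-8
    rec = np.sum(x * np.log(x_recon + eps)
                 + (1.0 - x) * np.log(1.0 - x_recon + eps), axis=-1)
    return rec - gaussian_kl(z_mu, z_logvar)
```

With a standard-normal posterior (zero mean, unit variance) the KL term vanishes and the ELBO reduces to the reconstruction term alone.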

TODO:

  1. AC-GAN/InfoGAN (see this repo)
  2. BEGAN (see this repo)
  3. WGAN

Dependencies

You'll need:

tensorflow==1.1.0
scipy==0.19.0
tensorbayes==0.3.0

Run models

All execution scripts adhere to the following format

python run_*.py --cmd

A list of possible command-line arguments can be found in each run_*.py script. The default arguments set up a semi-supervised regime in which only 1000 of the training samples are labeled, so posterior regularization is performed in a semi-supervised manner. In the case of the VAE, pay attention to the choice of encoder/decoder architecture controlled by the --design argument, since it determines whether autoregression is applied during inference. TensorBoard logs are automatically saved to ./log/ and models are saved to ./checkpoints/.
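To make the flag style concrete, here is a hypothetical argparse setup in the spirit of a run_*.py script. Apart from --design, the flag names and defaults are assumptions for illustration, not taken from the repo:

```python
import argparse

def build_parser():
    # Hypothetical subset of flags a run_*.py script might expose.
    p = argparse.ArgumentParser(description='Train a deep generative model')
    p.add_argument('--design', type=str, default='conv',
                   help='encoder/decoder architecture; controls whether '
                        'autoregression is applied during inference')
    p.add_argument('--n_labeled', type=int, default=1000,
                   help='number of labeled training samples (assumed flag)')
    p.add_argument('--log_dir', type=str, default='./log/',
                   help='TensorBoard log directory')
    p.add_argument('--ckpt_dir', type=str, default='./checkpoints/',
                   help='model checkpoint directory')
    return p

# Usage: args = build_parser().parse_args()
```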

Results

For each model, z-space and y-space interpolations are shown, respectively.
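The z-space interpolations are produced by decoding points along a line between two latent codes. A minimal sketch of the interpolation step, with the decoder itself omitted (function name is illustrative, not from the repo):

```python
import numpy as np

def interpolate(z_a, z_b, n_steps=8):
    # Linearly interpolate between two latent codes z_a and z_b.
    # Decoding each row of the result yields an interpolation strip.
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]
    return (1.0 - alphas) * z_a[None, :] + alphas * z_b[None, :]
```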

VAE

VAE (autoregressive inference)

No noticeable difference from vanilla VAE. I'd be curious to see if top-down inference makes a difference.

About


License: MIT License


Languages

Python 82.3%, Jupyter Notebook 17.7%