RuiShu / vae-experiments

Code for some of the experiments I did with variational autoencoders on multi-modality and Atari video prediction. The Atari video prediction work is in progress.


vae-experiments

Here is the code for some of the experiments I did with variational autoencoders on multi-modality and Atari video prediction. The code uses Torch7, which can be installed by following the official Torch installation instructions.

System requirements

  • All experiments were run on a GPU with the following libraries:
    • cuda/7.5
    • cuDNN/v4
    • hdf5
    • nccl

Required Torch7 libraries

  • nn. Building neural networks.
  • nngraph. Building graph-based neural networks.
  • optim. Various gradient descent parameter update methods.
  • cunn. Provides CUDA support for nn.
  • cudnn. Provides CUDNN support for nn.
  • torch-hdf5. HDF5 interface for Torch.
  • lfs. LuaFileSystem, for file manipulation.
  • penlight. Command-line argument parsing.
  • image. Provides support for reading images.
  • threads. For multi-threaded data loading.
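Most of the packages above can be installed through luarocks. A minimal sketch, assuming Torch7 and its bundled luarocks are already installed (torch-hdf5 is usually built from its repository rather than from a published rock, so that step is shown as a comment):

```shell
# Install the required Torch7 packages via luarocks.
luarocks install nn
luarocks install nngraph
luarocks install optim
luarocks install cunn
luarocks install cudnn
luarocks install penlight
luarocks install luafilesystem   # provides the lfs module
luarocks install image
luarocks install threads

# torch-hdf5 is typically installed from source, e.g.:
# git clone https://github.com/deepmind/torch-hdf5
# cd torch-hdf5 && luarocks make hdf5-0-0.rockspec
```

Note that cunn and cudnn additionally require a working CUDA toolkit and cuDNN installation (cuda/7.5 and cuDNN/v4 in the experiments above) before they will build.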

