zmonoid / nips2015-action-conditional-video-prediction

Implementation of "Action-Conditional Video Prediction using Deep Networks in Atari Games"

Home Page: https://sites.google.com/a/umich.edu/junhyuk-oh/action-conditional-video-prediction


Introduction

This repository implements the main algorithm of the following paper (Project website):

  • Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh, "Action-Conditional Video Prediction using Deep Networks in Atari Games," in Advances in Neural Information Processing Systems (NIPS), 2015.
@incollection{NIPS2015_5859,
author = {Oh, Junhyuk and Guo, Xiaoxiao and Lee, Honglak and Lewis, Richard L and Singh, Satinder},
booktitle = {Advances in Neural Information Processing Systems 28},
editor = {Cortes, C. and Lawrence, N. D. and Lee, D. D. and Sugiyama, M. and Garnett, R.},
pages = {2845--2853},
publisher = {Curran Associates, Inc.},
title = {{Action-Conditional Video Prediction using Deep Networks in Atari Games}},
year = {2015}
}

Install and Run

  1. Pull our bug-fixed container: docker pull zhoubinxyz/caffe-cu10

  2. Clone this version: git clone https://github.com/zmonoid/nips2015-action-conditional-video-prediction

  3. Open a terminal inside the container (Docker requires an absolute path for --volume, so use $(pwd)): cd nips2015-action-conditional-video-prediction; nvidia-docker run -it --volume "$(pwd)":/workspace zhoubinxyz/caffe-cu10

  4. Generate and prepare the dataset:
     4.1 cd breakout; mkdir test train; python generate_images.py — set stage=train (or stage=test) in generate_images.py to generate the corresponding split.
     4.2 python make_list.py to build the training image list; change the pattern to glob.glob('test/*/*.png') to build the test image list.
     4.3 convert_imageset ./ img_list.txt images to convert the images to an LMDB dataset.
     4.4 compute_image_mean images mean.binaryproto to compute the image mean.

  5. Train the model: ../train_cnn.sh 4 0
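The list-building step (4.2) can be sketched as follows. This is a hedged approximation of what make_list.py does (the actual script in the repository may format its output differently); it assumes each episode directory under train/ holds numbered PNG frames, and writes the "filename label" pairs that convert_imageset expects:

```python
import glob


def build_image_list(pattern='train/*/*.png', out_path='img_list.txt'):
    """Write one 'path label' line per frame, sorted by episode/frame.

    Caffe's convert_imageset expects 'filename label' lines; the label
    is unused for video prediction, so a dummy 0 is written.
    """
    paths = sorted(glob.glob(pattern))
    with open(out_path, 'w') as f:
        for p in paths:
            f.write('%s 0\n' % p)
    return paths


if __name__ == '__main__':
    frames = build_image_list()
    print('listed %d frames' % len(frames))
```

Sorting the glob results keeps frames in episode/time order, which matters because consecutive lines become consecutive LMDB entries.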


Data structure

The data directories should be organized as follows:

./[game name]/train/[%04d]/[%05d].png  # training images
./[game name]/train/[%04d]/act.log     # training actions
./[game name]/test/[%04d]/[%05d].png   # testing images
./[game name]/test/[%04d]/act.log      # testing actions
./[game name]/mean.binaryproto         # mean pixel image

[%04d] and [%05d] correspond to the episode index and the frame index, respectively (both starting from 0).
Each line of the act.log file specifies the action index (starting from 0) chosen by the player at the corresponding time step:

[action idx at time 0]
[action idx at time 1]
[action idx at time 2]
...
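Given that layout, an episode directory can be loaded by pairing the i-th frame with the i-th line of act.log. The sketch below is illustrative (the function name and structure are our assumptions, not code from the repository):

```python
import glob
import os


def load_episode(episode_dir):
    """Return a list of (frame_path, action_index) pairs for one episode.

    Frames are %05d.png files and act.log holds one integer action
    index per time step, so the i-th frame pairs with the i-th line.
    """
    frames = sorted(glob.glob(os.path.join(episode_dir, '*.png')))
    with open(os.path.join(episode_dir, 'act.log')) as f:
        actions = [int(line) for line in f if line.strip()]
    assert len(frames) == len(actions), 'frame/action count mismatch'
    return list(zip(frames, actions))
```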

The mean pixel values should be computed over all training images and converted to binaryproto format using Caffe.
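The mean computation itself (what Caffe's compute_image_mean tool does in step 4.4) is just a per-pixel average over the training set, as in this hedged NumPy sketch; the binaryproto serialization is left to Caffe:

```python
import numpy as np


def mean_image(frames):
    """Per-pixel mean over an iterable of HxWxC uint8 frames.

    Accumulate in float64 to avoid uint8 overflow, then divide by the
    frame count; Caffe subtracts this mean from every network input.
    """
    acc = None
    n = 0
    for frame in frames:
        f = np.asarray(frame, dtype=np.float64)
        acc = f if acc is None else acc + f
        n += 1
    return acc / n
```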

Training

The following scripts are provided for training:

  • train_cnn.sh : train a feedforward model with 1-step, 3-step, and 5-step objectives.
  • train_lstm.sh : train a recurrent model with 1-step, 3-step, and 5-step objectives.
  • train.sh : train any type of model with user-specified options (batch size, pre-trained weights, etc.).

The following commands show how to run the training scripts:

cd [game name]
../train_cnn.sh [num_actions] [gpu_id]
../train_lstm.sh [num_actions] [gpu_id]
../train.sh [model_type] [result_prefix] [lr] [num_act] [...]

Testing

The following scripts are provided for testing:

  • test_cnn.sh : shows predictions from a trained feedforward model.
  • test_lstm.sh : shows predictions from a trained recurrent model.
  • test.sh : shows predictions from a trained model with user-specified options.

The following commands show how to run the testing scripts:

cd [game name]
../test_cnn.sh [weights] [num_actions] [num_step] [gpu_id]
../test_lstm.sh [weights] [num_actions] [num_step] [gpu_id]
../test.sh [model_type] [weights] [num_action] [num_input_frames] [num_step] [gpu_id] [...]
  • If line 31 of test.py raises an error, replace the default font path with the path to any font available on your system:
font = ImageFont.truetype('[path for a font]', 20)
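One hedged workaround is to fall back to Pillow's built-in bitmap font when no TrueType file is found at the given path; the try/except wrapper below is our addition, not code from the repository:

```python
from PIL import ImageFont


def load_overlay_font(path='[path for a font]', size=20):
    """Load a TrueType font for drawing labels, with a safe fallback."""
    try:
        return ImageFont.truetype(path, size)
    except IOError:
        # No usable TrueType font at `path`: fall back to Pillow's
        # default bitmap font (fixed size, but always available).
        return ImageFont.load_default()
```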

Details

This repository uses the Adam optimizer, while the original paper uses RMSProp. We found that Adam converges more quickly, and 3-step training is usually sufficient to obtain reasonable results.
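The multi-step objective mentioned above feeds the model's own prediction back as the input for the next step, so compounding errors are penalized during training. A hedged NumPy sketch with a stand-in one-step predictor (the real model is the paper's action-conditional CNN/LSTM):

```python
import numpy as np


def k_step_loss(predict, frame, actions, targets):
    """Mean squared k-step prediction error.

    predict(frame, action) -> estimate of the next frame; each
    prediction is fed back as the input for the following step,
    which is what distinguishes k-step from 1-step training.
    """
    loss = 0.0
    current = frame
    for action, target in zip(actions, targets):
        current = predict(current, action)
        loss += np.mean((current - target) ** 2)
    return loss / len(targets)
```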


