Curiosity-driven Exploration by Self-supervised Prediction

In ICML 2017 [Project Website] [Demo Video]

Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
University of California, Berkeley

This is the code for our ICML 2017 paper on curiosity-driven exploration for reinforcement learning. The idea is to train the agent with an intrinsic curiosity-based reward, computed by an Intrinsic Curiosity Module (ICM), when external rewards from the environment are sparse. Surprisingly, ICM can be used even when no rewards are available from the environment at all, in which case the agent learns to explore purely out of curiosity: 'RL without rewards'. A minimal sketch of the curiosity reward follows the citation below. If you find this work useful in your research, please cite:

@inproceedings{pathakICMl17curiosity,
    Author = {Pathak, Deepak and Agrawal, Pulkit and
              Efros, Alexei A. and Darrell, Trevor},
    Title = {Curiosity-driven Exploration by Self-supervised Prediction},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2017}
}
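
As a rough illustration of the curiosity reward (a minimal NumPy sketch of the paper's formulation, not the repository's TensorFlow implementation): ICM learns a feature embedding phi together with a forward model that predicts phi(s_t+1) from phi(s_t) and the action, and the intrinsic reward is the scaled prediction error of that forward model. The function names and the eta value below are illustrative assumptions.

import numpy as np

def intrinsic_reward(phi_next_pred, phi_next, eta=0.01):
    # Curiosity bonus from the paper: r_i = (eta / 2) * ||phi_hat(s_t+1) - phi(s_t+1)||^2
    # phi_next_pred: forward-model prediction of the next state's features (phi_hat)
    # phi_next:      features of the observed next state (phi)
    # eta:           scaling hyperparameter (illustrative value, not taken from this repo)
    return 0.5 * eta * float(np.sum((phi_next_pred - phi_next) ** 2))

def total_reward(r_extrinsic, phi_next_pred, phi_next):
    # The policy is trained on extrinsic + intrinsic reward; when the extrinsic
    # reward is sparse or absent, the curiosity term alone drives exploration.
    return r_extrinsic + intrinsic_reward(phi_next_pred, phi_next)

# Example: with zero extrinsic reward, the bonus is larger where the forward
# model predicts poorly, i.e. in parts of the environment the agent has not yet learned.
phi_pred = np.array([0.2, 0.9, 0.1])
phi_true = np.array([0.5, 0.4, 0.3])
print(total_reward(0.0, phi_pred, phi_true))  # 0.5 * 0.01 * (0.09 + 0.25 + 0.04) = 0.0019

In the paper, phi itself is trained with an inverse-dynamics loss (predicting the action from phi(s_t) and phi(s_t+1)), which keeps the features focused on aspects of the environment the agent can affect.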

1) Running demo

[To be released very soon. Stay tuned!]

  1. Clone the repository:
git clone -b master --single-branch https://github.com/pathak22/noreward-rl.git
cd noreward-rl/
  2. Install the required packages, including TensorFlow 0.12, in a virtual environment (a quick import check follows this list):
virtualenv curiosity
source $PWD/curiosity/bin/activate
pip install -r requirements.txt
  3. Fetch the trained policy models:
bash ./models/download_models.sh
  4. Run the demo:
cd noreward-rl/src/
python demo.py
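
As a quick sanity check after step 2 (an illustrative snippet, assuming it is run with the curiosity virtualenv activated), you can confirm that the TensorFlow build installed from requirements.txt imports correctly:

# check_tf.py -- illustrative only; run inside the activated virtualenv
import tensorflow as tf
print(tf.__version__)  # step 2 targets TensorFlow 0.12, so expect a 0.12.x version string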

2) Training code

[To be released soon. Stay tuned!]

About

Curiosity-driven Exploration for Deep Reinforcement Learning by Self-supervised Prediction

License: BSD 2-Clause "Simplified" License