Code for "Proximal Distilled Evolutionary Reinforcement Learning", accepted at AAAI 2020

Paper: https://arxiv.org/abs/1906.09807

Proximal Distilled Evolutionary Reinforcement Learning

Official code for the AAAI 2020 paper "Proximal Distilled Evolutionary Reinforcement Learning".

PDERL

Use the following to cite:

@article{Bodnar2019ProximalDE,
  title={Proximal Distilled Evolutionary Reinforcement Learning},
  author={Cristian Bodnar and Ben Day and Pietro Lio'},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.09807}
}

To Run PDERL

First, install all the dependencies by running pip install -r requirements.txt. Additionally, to install mujoco-py 2.0.2.2, follow the instructions on its official GitHub page.

To run PDERL with proximal mutations and distillation-based crossover use:

python run_pderl.py -env=$ENV_NAME$ -distil -proximal_mut -mut_mag=$MUT_MAG$ -logdir=$LOG_DIR$
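To give a feel for what the -proximal_mut flag does: proximal (safe) mutations scale per-weight Gaussian noise by how sensitive the policy's output is to that weight, so behaviour changes gradually. The sketch below illustrates the idea on a toy linear policy with NumPy; the function name and details are hypothetical and not the repository's implementation.

```python
import numpy as np

def proximal_mutation(weights, states, mag=0.05, eps=1e-8):
    """Illustrative safe-mutation-style perturbation of a linear policy.

    weights: (out, in) matrix of a linear policy a = W s.
    states:  (batch, in) states sampled from recent experience.
    Noise on each weight is damped by the sensitivity of the action
    to that weight, estimated over the state batch.
    """
    # For a linear policy, d a_i / d W_ij = s_j, so the sensitivity of
    # column j is the RMS of feature j over the sampled states.
    sensitivity = np.sqrt((states ** 2).mean(axis=0))      # shape (in,)
    noise = np.random.randn(*weights.shape) * mag          # raw Gaussian noise
    # Weights tied to high-magnitude features get smaller perturbations.
    return weights + noise / (sensitivity[None, :] + eps)
```

The effect is that mutations stay "proximal" in behaviour space: directions in which small weight changes cause large action changes are perturbed less.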

To evaluate and visualise a trained model in an environment use:

python play_pderl.py -env=$ENV_NAME$ -model_path=$MODEL_PATH$ -render 
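Evaluation here means rolling the trained policy in the environment and averaging episode returns. As a self-contained illustration of such a loop (hypothetical names, with a stub gym-style environment standing in for MuJoCo):

```python
import numpy as np

class StubEnv:
    """Tiny stand-in for a gym-style environment (not MuJoCo)."""
    def __init__(self, horizon=5):
        self.horizon, self.t = horizon, 0

    def reset(self):
        self.t = 0
        return np.zeros(3)

    def step(self, action):
        self.t += 1
        reward = float(-np.abs(action).sum())   # penalize large actions
        done = self.t >= self.horizon
        return np.zeros(3), reward, done, {}

def evaluate(policy, env, episodes=3):
    """Average undiscounted return of `policy` over several rollouts."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, r, done, _ = env.step(policy(obs))
            total += r
        returns.append(total)
    return sum(returns) / len(returns)
```

For example, evaluate(lambda obs: np.zeros(1), StubEnv()) returns 0.0, since the zero action incurs no penalty.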

ENVS TESTED

'Hopper-v2'
'HalfCheetah-v2'
'Swimmer-v2'
'Ant-v2'
'Walker2d-v2'

CREDITS

Our code is largely based on that of Khadka and Tumer, and we would like to thank them for making their code publicly available. The proximal mutations code also relies on the safe mutations code of Lehman et al. from Uber Research.
