Action Robust Reinforcement Learning

Code accompanying the paper "Action Robust Reinforcement Learning and Applications in Continuous Control" (https://arxiv.org/abs/1901.09184).

Requirements:

The commands below assume Python 3.6, OpenAI Gym with the MuJoCo environments (the examples use Hopper-v2), and Jupyter for the plotting notebook.

How to train:

```
python3.6 main.py --updates_per_step 10 --env-name "Hopper-v2" --alpha 0.1 --method pr_mdp
```

Here --method takes one of three values: mdp, pr_mdp, or nr_mdp, where pr_mdp and nr_mdp are the probabilistic robust and noisy robust criteria defined in the paper, and mdp is standard (non-robust) training. --alpha sets the perturbation level. A sketch of the two perturbation schemes follows.
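For intuition, here is a minimal sketch of the two perturbation schemes as defined in the paper. This is not code from this repo; the function and variable names are illustrative. Under pr_mdp, the adversary's action replaces the agent's with probability alpha; under nr_mdp, the executed action is a convex mixture of the two.

```python
import numpy as np

def perturb_action(agent_action, adversary_action, alpha, method, rng=np.random):
    """Illustrative sketch of the action perturbations from the paper.

    pr_mdp: with probability alpha, the adversary's action is executed
            instead of the agent's (probabilistic robustness).
    nr_mdp: the executed action is the mixture
            (1 - alpha) * agent + alpha * adversary (noisy robustness).
    mdp:    no perturbation (standard training).
    """
    if method == "pr_mdp":
        return adversary_action if rng.random() < alpha else agent_action
    if method == "nr_mdp":
        return (1 - alpha) * agent_action + alpha * adversary_action
    return agent_action  # plain MDP: the agent's action is executed as-is

# Example: a 3-dimensional continuous action, alpha = 0.1
a_agent = np.array([0.5, -0.2, 0.1])
a_adv = np.array([-1.0, 1.0, -1.0])
print(perturb_action(a_agent, a_adv, alpha=0.1, method="nr_mdp"))
```

In either scheme, a larger alpha means a stronger adversary during training.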

All results are saved in the models folder.

How to evaluate:

Once a model has been trained, run:

```
python3.6 test.py --eval_type model
```

where --eval_type model evaluates robustness to model (mass) uncertainty, and --eval_type model_noise creates the 2D visualizations.
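For a sense of what mass-uncertainty evaluation involves, here is a minimal sketch assuming an old-style Gym API and a mujoco-py-backed environment; the function, its parameters, and the scaling loop are illustrative, not the repo's test.py.

```python
import gym

def evaluate_under_mass_uncertainty(policy, env_name="Hopper-v2",
                                    scales=(0.8, 0.9, 1.0, 1.1, 1.2),
                                    episodes=10):
    """Roll out a policy while scaling the body masses of a MuJoCo env.

    `policy` is any callable mapping an observation to an action.
    Returns the mean return per mass scale.
    """
    returns = {}
    for scale in scales:
        env = gym.make(env_name)
        # mujoco-py exposes the body masses as a writable array.
        env.unwrapped.model.body_mass[:] *= scale
        total = 0.0
        for _ in range(episodes):
            obs, done = env.reset(), False
            while not done:
                obs, reward, done, _ = env.step(policy(obs))
                total += reward
        returns[scale] = total / episodes
        env.close()
    return returns
```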

How to visualize:

See Comparison_Plots.ipynb for an example of how to load and visualize your trained models.
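If you prefer a standalone script over the notebook, a minimal matplotlib sketch could look like the following (the names are hypothetical; it consumes dictionaries shaped like the output of the evaluation sketch above):

```python
import matplotlib.pyplot as plt

def plot_mass_robustness(curves):
    """Plot mean return vs. mass scale for several trained methods.

    `curves` maps a label (e.g. "pr_mdp", "nr_mdp", "mdp") to a dict of
    {mass_scale: mean_return}, e.g. as produced by the sketch above.
    """
    for label, by_scale in curves.items():
        scales = sorted(by_scale)
        plt.plot(scales, [by_scale[s] for s in scales], marker="o", label=label)
    plt.xlabel("relative mass scale")
    plt.ylabel("mean return")
    plt.legend()
    plt.show()
```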


License: MIT License

