EARL: Environments for Autonomous Reinforcement Learning

EARL is an open-source benchmark for autonomous reinforcement learning, where the agent learns in a continual non-episodic setting without relying on extrinsic interventions for training. The benchmark consists of 6 challenging environments, covering diverse scenarios from dexterous manipulation to locomotion.

For an overview of the problem setting and descriptions of the environments, check out our website. For more details on autonomous RL, the evaluation protocol, and the baselines, please refer to our ICLR paper.

Setup

EARL can be installed by cloning the repository as follows:

git clone https://github.com/architsharma97/earl_benchmark.git
cd earl_benchmark
pip install -e .

The MuJoCo-based environments require a (free) MuJoCo license; obtain a key and place it in your MuJoCo installation directory (conventionally ~/.mujoco).
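
As a quick sanity check, the snippet below (a sketch, not taken from the EARL docs) verifies that the license key sits at the path the MuJoCo Python bindings conventionally expect; the exact path is an assumption about your setup.

import os

# Conventional key location for mujoco-py-era installations (an assumption,
# adjust if your MuJoCo installation lives elsewhere).
key_path = os.path.expanduser("~/.mujoco/mjkey.txt")
print("MuJoCo key found" if os.path.exists(key_path) else "MuJoCo key missing")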

Using environments in EARL

You can load environments by first creating an earl_benchmark.EARLEnvs(env) instance for the desired environment. You can then retrieve the train and eval environments, the initial and goal states, and demonstrations (if available) as follows:

import earl_benchmark

# Create the loader for the chosen environment and reward type.
env_loader = earl_benchmark.EARLEnvs('tabletop_manipulation', reward_type='sparse')
# Non-episodic training environment and the environment used for evaluation.
train_env, eval_env = env_loader.get_envs()
# Initial states for training and the goal states used for evaluation.
initial_states = env_loader.get_initial_states()
goal_states = env_loader.get_goal_states()
# Forward and reverse demonstrations, when the environment provides them.
forward_demos, reverse_demos = env_loader.get_demonstrations()
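
The training environment is meant for long, non-episodic interaction. The sketch below shows one way to drive it with a random policy; it assumes the environment exposes the standard Gym reset/step API and is not part of the EARL documentation.

# Minimal interaction sketch (an assumption, not from the EARL docs):
# reset once, then keep stepping without further resets, as in the
# autonomous RL setting.
obs = train_env.reset()
for _ in range(10000):
    action = train_env.action_space.sample()
    obs, reward, done, info = train_env.step(action)

eval_env is intended for separate, periodic evaluation from the provided initial states.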

Acknowledgements

EARL is built on top of environments developed by various researchers, and we would like to thank the authors of the original environment implementations.

Disclaimer

This repository is a work in progress. Please contact Archit Sharma if you plan to use the benchmark and run into trouble.

License

EARL is released under the Apache License 2.0.

