D4RL: Datasets for Deep Data-Driven Reinforcement Learning

License: Apache License 2.0

Status: Beta (stable release expected June 2020)

D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms. A supplementary whitepaper and website are also available.

Setup

D4RL can be installed by cloning the repository as follows:

git clone https://github.com/rail-berkeley/d4rl.git
cd d4rl
pip install -e .

Or, alternatively:

pip install git+https://github.com/rail-berkeley/d4rl@master#egg=d4rl

The control environments require MuJoCo as a dependency. You may need to obtain a MuJoCo license and follow the setup instructions for mujoco_py. This mostly involves copying the license key into your MuJoCo installation folder.
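
After installation, a quick way to verify the mujoco_py setup is to import it directly; the import fails with a descriptive error if the MuJoCo binaries or license key cannot be found. A minimal sketch, assuming a standard mujoco_py layout (the key location below is mujoco_py's documented default, not something d4rl itself requires):

import os

# mujoco_py looks for the license key at ~/.mujoco/mjkey.txt by default,
# or wherever $MUJOCO_PY_MJKEY_PATH points.
key_path = os.path.expanduser('~/.mujoco/mjkey.txt')
print('license key found:', os.path.exists(key_path))

# Importing mujoco_py triggers its first-time build and raises a
# descriptive error if the binaries or key are missing.
import mujoco_py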

Using d4rl

d4rl uses the OpenAI Gym API. Tasks are created via the gym.make function. A full list of all tasks is available in the project documentation.

Each task is associated with a fixed offline dataset, which can be obtained with the get_dataset method. This method returns a dictionary with observations, actions, rewards, terminals, and infos as keys.

import gym
import d4rl # Import required to register environments

# Create the environment
env = gym.make('maze2d-umaze-v0')

# d4rl abides by the OpenAI gym interface
env.reset()
env.step(env.action_space.sample())

# Each task is associated with a dataset
dataset = env.get_dataset()
print(dataset['observations']) # An N x dim_observation Numpy array of observations
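
The remaining entries follow the same layout, with one row per transition and all keys aligned index-by-index. A short sketch continuing from the example above:

# Each key holds N entries; row i of every array describes transition i
for key in ['observations', 'actions', 'rewards', 'terminals']:
    print(key, dataset[key].shape)

The library also exposes d4rl.qlearning_dataset(env), which returns the same data reorganized with a next_observations key, a convenient format for Q-learning-style algorithms.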

Datasets are automatically downloaded to the ~/.d4rl/datasets directory. To change this location, set the $D4RL_DATASET_DIR environment variable to a directory of your choosing, or pass a dataset filepath directly to the get_dataset method.
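
For example (a minimal sketch; the paths below are placeholders, and the environment variable should be set before d4rl is first imported, since the dataset directory is resolved at import time):

import os
os.environ['D4RL_DATASET_DIR'] = '/data/d4rl'  # placeholder path

import gym
import d4rl

env = gym.make('maze2d-umaze-v0')
dataset = env.get_dataset()  # downloads into /data/d4rl on first use

# Alternatively, point get_dataset at an already-downloaded file via the
# h5path keyword (the filename below is a placeholder).
dataset = env.get_dataset(h5path='/data/d4rl/maze2d-umaze-sparse-v0.hdf5')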

Acknowledgements

We would like to thank the authors of hand_dapg and gym-minigrid for their implementations of the respective environments.

Citation

Please use the following BibTeX entry for citations:

@misc{fu2020d4rl,
    title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
    author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
    year={2020},
    eprint={2004.07219},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
