satpreetsingh / jax-rl

Jax (Flax) implementation of algorithms for Deep Reinforcement Learning with continuous action spaces.

Jax (Flax) RL

This repository contains Jax (Flax) implementations of Reinforcement Learning algorithms for continuous control.

The goal of this repository is to provide simple and clean implementations to build research on top of. Please do not use this repository for baseline results; use the original implementations instead.

Installation

Install and activate an Anaconda environment:

conda env create -f environment.yml 
conda activate jax-rl

If you want to run this code on GPU, please follow the installation instructions from the official JAX repository.

Please follow the instructions to build mujoco-py with fast headless GPU rendering.

Run

OpenAI Gym MuJoCo tasks

python train.py --env_name=HalfCheetah-v2 --save_dir=./tmp/

DeepMind Control Suite (`--env_name=dmc-domain-task`)

python train.py --env_name=dmc-cheetah-run --save_dir=./tmp/
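The `dmc-domain-task` convention maps a single `--env_name` string onto a Control Suite domain and task (here, the `cheetah` domain's `run` task). A minimal sketch of how such a name can be split — the helper `parse_dmc_name` is hypothetical, for illustration only; the repository's actual parsing may differ:

```python
def parse_dmc_name(env_name: str) -> tuple[str, str]:
    """Split a 'dmc-domain-task' string into (domain, task).

    Hypothetical helper for illustration; the repository's own
    parsing logic may differ.
    """
    prefix, domain, task = env_name.split("-", maxsplit=2)
    if prefix != "dmc":
        raise ValueError(f"expected a 'dmc-' prefixed name, got {env_name!r}")
    return domain, task


print(parse_dmc_name("dmc-cheetah-run"))  # ('cheetah', 'run')
```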

For offline RL

python train_offline.py --env_name=halfcheetah-expert-v0  --dataset_name=d4rl --save_dir=./tmp/

For RL finetuning

python train_finetuning.py --env_name=HalfCheetah-v2 --dataset_name=awac --save_dir=./tmp/

Troubleshooting

If you experience out-of-memory errors, especially with video saving enabled, please consider reading the JAX docs on GPU memory allocation. You can also try running with the following environment variable:

XLA_PYTHON_CLIENT_MEM_FRACTION=0.80 python ...
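The same setting can be applied from inside a Python script, as long as the variable is exported before JAX is first imported (a sketch; `0.80` is just the example fraction from above):

```python
import os

# These must be set *before* the first `import jax`; the XLA client
# reads them once at startup.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.80"

# Alternative: disable preallocation entirely, so JAX allocates GPU
# memory on demand instead of grabbing a large block up front.
# os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"

# import jax  # only import JAX after the variables are set
```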

Tensorboard

Launch TensorBoard to see training and evaluation logs:

tensorboard --logdir=./tmp/

Results

*(Learning curves on the OpenAI Gym MuJoCo tasks; plot omitted.)*

Contributing

When contributing to this repository, please first discuss the change you wish to make via an issue. If you are not familiar with pull requests, please read this documentation.


License: MIT

