ymd-h / cpprb

Fast Flexible Replay Buffer Library (Mirror repository of https://gitlab.com/ymd_h/cpprb)


cpprb


1 Overview

cpprb is a Python (CPython) module providing replay buffer classes for reinforcement learning.

Major target users are researchers and library developers.

You can build your own reinforcement learning algorithms together with your favorite deep learning library (e.g. TensorFlow, PyTorch).

cpprb focuses on speed, flexibility, and memory efficiency.

By utilizing Cython, complicated calculations (e.g. segment tree for prioritized experience replay) are offloaded onto C++. (The name cpprb comes from “C++ Replay Buffer”.)

In terms of API, cpprb initially referred to OpenAI Baselines’ implementation. The current version of cpprb is much more flexible: any number of values of any NumPy-compatible types can be stored (as long as memory capacity is sufficient). For example, you can store the next action and the next-next observation, too.
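As a small sketch of this flexibility (the key names next_act and next_next_obs are our own illustrative choices, not names cpprb requires), extra columns are defined like any other entry in env_dict:

import numpy as np
from cpprb import ReplayBuffer

# Freely named columns: besides the usual keys, store the next action
# and the observation two steps ahead. Any NumPy-compatible dtype works.
rb = ReplayBuffer(32,
                  env_dict={"obs": {"shape": 3},
                            "act": {"shape": 1, "dtype": np.float32},
                            "next_act": {"shape": 1, "dtype": np.float32},
                            "next_next_obs": {"shape": 3}})

rb.add(obs=np.zeros(3), act=np.zeros(1),
       next_act=np.ones(1), next_next_obs=np.ones(3))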

2 Installation

cpprb requires the following software before installation.

  • C++17 compiler (for installation from source)
  • Python 3
  • pip

Additionally, users have shared helpful feedback on installing under Ubuntu. (Thanks!)

2.1 Install from PyPI (Recommended)

The following command installs cpprb together with other dependencies.

pip install cpprb

Depending on your environment, you might need sudo or the --user flag for installation.

On supported platforms (Linux x86-64, Windows amd64, and macOS x86_64), binary packages hosted on PyPI can be used, so you don’t need a C++ compiler. On other platforms, such as 32-bit or ARM-architecture Linux and Windows, you cannot install from a binary and need to compile cpprb yourself. Please be patient; we plan to support more platforms in the future.

If you have any trouble installing from binary, you can fall back to a source installation by passing the --no-binary option to the pip command above. (To avoid a NumPy source installation, it is better to install NumPy beforehand.)

pip install numpy
pip install --no-binary cpprb cpprb

2.2 Install from source code

First, download the source code manually or clone the repository:

git clone https://gitlab.com/ymd_h/cpprb.git

Then install it in the usual way:

cd cpprb
pip install .

This installation converts the extended Python sources (.pyx) to C++ (.cpp), so it takes longer than installing from PyPI.

3 Usage

3.1 Basic Usage

Basic usage consists of the following steps:

  1. Create replay buffer (ReplayBuffer.__init__)
  2. Add transitions (ReplayBuffer.add)
    1. Reset at episode end (ReplayBuffer.on_episode_end)
  3. Sample transitions (ReplayBuffer.sample)

3.2 Example Code

Here is a simple example of storing standard environment values (i.e. obs, act, rew, next_obs, and done).

import numpy as np

from cpprb import ReplayBuffer

buffer_size = 256
obs_shape = 3
act_dim = 1
rb = ReplayBuffer(buffer_size,
                  env_dict={"obs": {"shape": obs_shape},
                            "act": {"shape": act_dim},
                            "rew": {},
                            "next_obs": {"shape": obs_shape},
                            "done": {}})

# Dummy transition values; in a real loop these come from the environment.
obs = np.ones(obs_shape)
act = np.ones(act_dim)
rew = 0
next_obs = np.ones(obs_shape)
done = 0

for i in range(500):
    rb.add(obs=obs, act=act, rew=rew, next_obs=next_obs, done=done)

    if done:
        # Together with resetting the environment, call ReplayBuffer.on_episode_end()
        rb.on_episode_end()

batch_size = 32
sample = rb.sample(batch_size)
# sample is a dictionary whose keys are 'obs', 'act', 'rew', 'next_obs', and 'done'

3.3 Construction Parameters

(See also API reference)

Name            Type                       Optional            Description
--------------  -------------------------  ------------------  ---------------------------------
size            int                        No                  Buffer size
env_dict        dict                       Yes (but unusable)  Environment definition (See here)
next_of         str or array-like of str   Yes                 Memory compression (See here)
stack_compress  str or array-like of str   Yes                 Memory compression (See here)
default_dtype   numpy.dtype                Yes                 Fallback data type
Nstep           dict                       Yes                 Nstep configuration (See here)
mmap_prefix     str                        Yes                 mmap file prefix (See here)
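As a rough sketch of how the memory-compression parameters combine (the Atari-like frame-stack shape is an illustrative assumption, not a cpprb requirement), next_of derives a "next_obs" column sharing memory with "obs", and stack_compress lets overlapping frame stacks share memory:

import numpy as np
from cpprb import ReplayBuffer

# Sketch: memory-compressed buffer for stacked image observations.
# "next_obs" is derived from next_of="obs" and is not listed in env_dict.
rb = ReplayBuffer(1024,
                  env_dict={"obs": {"shape": (84, 84, 4), "dtype": np.ubyte},
                            "act": {},
                            "rew": {},
                            "done": {}},
                  next_of="obs",
                  stack_compress="obs")

obs = np.zeros((84, 84, 4), dtype=np.ubyte)
rb.add(obs=obs, next_obs=obs, act=1, rew=0.0, done=0)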

3.4 Notes

Flexible environment values are defined by env_dict at buffer creation. The details are described in the documentation.

Since stored values have flexible names, you have to pass them to ReplayBuffer.add by keyword.

4 Features

cpprb provides buffer classes for building the following algorithms.

Algorithm                              cpprb class                                Paper
-------------------------------------  -----------------------------------------  ----------------------
Experience Replay                      ReplayBuffer                               L. J. Lin
Prioritized Experience Replay          PrioritizedReplayBuffer                    T. Schaul et al.
Multi-step (Nstep) Learning            ReplayBuffer, PrioritizedReplayBuffer
Multiprocess Learning (Ape-X)          MPReplayBuffer, MPPrioritizedReplayBuffer  D. Horgan et al.
Large Batch Experience Replay (LaBER)  LaBERmean, LaBERlazy, LaBERmax             T. Lahire et al.
Reverse Experience Replay (RER)        ReverseReplayBuffer                        E. Rotinov
Hindsight Experience Replay (HER)      HindsightReplayBuffer                      M. Andrychowicz et al.
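For example, PrioritizedReplayBuffer follows the same pattern as ReplayBuffer, adding a priority-aware sample and an explicit priority update. A minimal sketch (the alpha and beta values are conventional choices from the PER paper, used here only for illustration):

import numpy as np
from cpprb import PrioritizedReplayBuffer

rb = PrioritizedReplayBuffer(256,
                             env_dict={"obs": {"shape": 3},
                                       "act": {"shape": 1},
                                       "rew": {},
                                       "next_obs": {"shape": 3},
                                       "done": {}},
                             alpha=0.6)

for _ in range(100):
    rb.add(obs=np.ones(3), act=np.ones(1), rew=1.0,
           next_obs=np.ones(3), done=0.0)

sample = rb.sample(32, beta=0.4)
# The sample additionally contains importance "weights" and "indexes".
td_error = np.random.rand(32)  # placeholder for your TD errors
rb.update_priorities(sample["indexes"], np.abs(td_error) + 1e-6)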

cpprb features and their usage are described in the documentation.

5 Design

5.1 Column-oriented and Flexible

One of the most distinctive design choices of cpprb is its column-oriented, flexibly defined transitions. As far as we know, other replay buffer implementations adopt either row-oriented flexible transitions (i.e. an array of transition objects) or column-oriented non-flexible transitions.

In deep reinforcement learning, a sampled batch is divided into variables (i.e. obs, act, etc.). If the sampled batch is row-oriented, users (or the library) need to convert it into a column-oriented one. (See the documentation, too)
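A small sketch of what column orientation means in practice: every sampled variable comes back as a single ndarray, so no per-transition unpacking is needed.

import numpy as np
from cpprb import ReplayBuffer

rb = ReplayBuffer(128, {"obs": {"shape": 3}, "act": {"shape": 1}})
for _ in range(128):
    rb.add(obs=np.ones(3), act=np.ones(1))

# Each variable is already a contiguous batch, ready for a network.
sample = rb.sample(32)
assert sample["obs"].shape == (32, 3)
assert sample["act"].shape == (32, 1)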

5.2 Batch Insertion

cpprb can accept multiple transitions in a single add call. This design is convenient when batches of transitions are moved from local buffers to a global buffer. It is also more efficient, not only because it removes a pure-Python for loop but also because it suppresses unnecessary priority updates for PER. (See the documentation, too)
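A minimal sketch of batch insertion: passing arrays whose leading dimension is the batch size stores all transitions at once.

import numpy as np
from cpprb import ReplayBuffer

rb = ReplayBuffer(256, {"obs": {"shape": 3}, "rew": {}, "done": {}})

# 16 transitions in a single call; no pure-Python loop over transitions.
rb.add(obs=np.ones((16, 3)),
       rew=np.zeros(16),
       done=np.zeros(16))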

5.3 Minimum Dependency

We try to minimize dependencies. Only NumPy is required at runtime. A small dependency footprint is always preferable for avoiding dependency hell.

6 Contributing to cpprb

Any contributions are very welcome!

6.1 Making Community Larger

A bigger community makes development more active and improves cpprb.

6.2 Q & A at Forum

When you have problems or requests, you can check the Discussions on GitHub.com. If you still cannot find any information, you can post your own.

We keep issues on GitLab.com, and users are still allowed to open issues there; however, we mainly use it as a development issue tracker.

6.3 Merge Request (Pull Request)

cpprb follows local rules:

  • Branch Name
    • “HotFix_***” for bug fix
    • “Feature_***” for new feature implementation
  • docstring
  • Unit Test
    • Put test code under the “test/” directory (see the sketch after this list)
    • Tests can be run with the python -m unittest <Your Test Code> command
    • Continuous Integration on GitLab CI is configured by .gitlab-ci.yml
  • Open an issue and associate it to Merge Request
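Following those rules, a unit test might look like the sketch below (the file name and test class are hypothetical; get_stored_size is part of the public API):

# test/test_my_feature.py
import unittest

import numpy as np

from cpprb import ReplayBuffer

class TestMyFeature(unittest.TestCase):
    def test_add_and_sample(self):
        rb = ReplayBuffer(4, {"obs": {"shape": 2}})
        rb.add(obs=np.zeros(2))
        self.assertEqual(rb.get_stored_size(), 1)

if __name__ == "__main__":
    unittest.main()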

Step-by-step instructions for beginners are described here.

7 Links

7.1 cpprb sites

7.2 cpprb users’ repositories

keiohta/TF2RL
TensorFlow2.x Reinforcement Learning

7.3 Example usage at Kaggle competition

7.4 Japanese Documents

8 License

cpprb is available under MIT license.

9 Citation

We would be very happy if you cite cpprb in your papers.

@misc{Yamada_cpprb_2019,
  author = {Yamada, Hiroyuki},
  month  = {1},
  title  = {{cpprb}},
  url    = {https://gitlab.com/ymd_h/cpprb},
  year   = {2019}
}
