LondonNode / Pearl

Adaptable tools for building reinforcement learning and evolutionary computation algorithms.


Pearl

The Parallel Evolutionary and Reinforcement Learning Library (Pearl) is a PyTorch-based package designed for rapid prototyping of new adaptive decision-making algorithms at the intersection of reinforcement learning (RL) and evolutionary computation (EC). It is not intended to provide pre-built algorithms as baselines, but rather flexible tools that let users quickly build and test their own implementations and ideas. A technical report and a separate tutorial repo using Google Colab are also included to introduce users to the library.

Main Features

Features Pearl
Model-free RL algorithms (e.g. Actor-Critic) ✔️
Model-based RL algorithms (e.g. Dyna-Q) ✔️
EC algorithms (e.g. Genetic Algorithm) ✔️
Hybrid algorithms (e.g. CEM-DDPG) ✔️
Multi-agent support ✔️
Tensorboard integration ✔️
Modular and extensible components ✔️
Opinionated module settings ✔️
Custom callbacks ✔️

User Guide

Installation

There are two ways to install this package:

  1. From PyPI: pip install pearll
  2. From source: git clone git@github.com:LondonNode/Pearl.git
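
After installing, a quick way to check that the package is importable is a short smoke test. This is a minimal sketch; it assumes only that the package imports under the name pearll, consistent with the pip package name above and the python3 -m pearll.demo usage shown later.

  # Post-install smoke test: import the package and report the installed version.
  # Assumes only that the distribution/import name is "pearll".
  from importlib.metadata import version

  import pearll  # raises ImportError if the installation did not succeed

  print("pearll version:", version("pearll"))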

Module Guide

  • agents: implementations of RL and EC agents, where the other modular components are put together
  • buffers: handle storage and sampling of trajectories
  • callbacks: inject logic at every step made in an environment (e.g. save model, early stopping)
  • common: methods applicable to all other modules (e.g. enumerations) and a main utils.py file with some useful general logic
  • explorers: action explorers for enhanced exploration, adding noise to actions and random exploration for the first n steps
  • models: neural network structures organized as encoder -> torso -> head (see the sketch after this list)
  • signal_processing: signal processing logic for extra modularity (e.g. TD returns, GAE)
  • updaters: update neural networks and adaptive/iterative algorithms
  • settings.py: settings objects for the above components, which can be extended for custom components
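
To make the encoder -> torso -> head layout concrete, here is an illustrative sketch in plain PyTorch. It is not pearll's actual model API (the class name and signature below are invented for illustration); it only mirrors the composition described above: an encoder that maps observations to features, a torso of shared hidden layers, and a head that produces the final output.

  import torch
  import torch.nn as nn

  # Illustrative only: a generic encoder -> torso -> head composition,
  # not pearll's actual model classes.
  class PolicyNetwork(nn.Module):
      def __init__(self, obs_dim: int, hidden_dim: int, action_dim: int):
          super().__init__()
          # encoder: maps raw observations into a latent feature space
          self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
          # torso: shared hidden layers operating on the encoded features
          self.torso = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
          # head: task-specific output layer (here, action logits)
          self.head = nn.Linear(hidden_dim, action_dim)

      def forward(self, obs: torch.Tensor) -> torch.Tensor:
          return self.head(self.torso(self.encoder(obs)))

  # Example: a policy for 4-dimensional observations and 2 discrete actions
  policy = PolicyNetwork(obs_dim=4, hidden_dim=64, action_dim=2)
  logits = policy(torch.randn(1, 4))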

Agent Templates

See pearll/agents/templates.py for the templates to create your own agents! For more examples, see specific agent implementations under pearll/agents.

Agent Performance

To see training performance, run tensorboard --logdir runs, or tensorboard --logdir <tensorboard_log_path> where <tensorboard_log_path> is the log path defined in your algorithm class initialization.
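
If you want to log extra metrics alongside an agent's training curves, the standard PyTorch SummaryWriter can write into the same log directory that tensorboard --logdir picks up. This is a generic PyTorch sketch, not part of pearll's own logging; the run directory name used below is only an example.

  from torch.utils.tensorboard import SummaryWriter

  # Write a custom scalar into a run directory that `tensorboard --logdir runs` will find.
  # The sub-directory name "runs/my_experiment" is an arbitrary example.
  writer = SummaryWriter(log_dir="runs/my_experiment")
  for step in range(100):
      writer.add_scalar("custom/episode_reward", float(step), global_step=step)
  writer.close()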

Python Scripts

To run these scripts, first change into the directory where the library is installed: cd pearll.

  • demo.py: script to run very basic demos of agents with pre-defined hyperparameters; run python3 -m pearll.demo -h for more info
  • plot.py: script to generate more complex plots that can't be obtained via Tensorboard (e.g. multiple subplots); run python3 -m pearll.plot -h for more info

Developer Guide

Scripts

Linux

  1. scripts/setup_dev.sh: set up your virtual environment
  2. scripts/run_tests.sh: run tests

Windows

  1. scripts/windows_setup_dev.bat: set up your virtual environment
  2. scripts/windows_run_tests.bat: run tests

Dependency Management

Pearl uses poetry instead of pip for dependency management and build releases. As a quick guide:

  1. Run poetry add [package] to add more package dependencies.
  2. Poetry automatically handles the virtual environment; check pyproject.toml for specifics on the virtual environment setup.
  3. To run something in the poetry virtual environment, prefix the command with poetry run. For example, to run a python file: poetry run python3 script.py.

Credit

Citing Pearl

@misc{tangri2022pearl,
      title={Pearl: Parallel Evolutionary and Reinforcement Learning Library}, 
      author={Rohan Tangri and Danilo P. Mandic and Anthony G. Constantinides},
      year={2022},
      eprint={2201.09568},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Acknowledgements

Pearl was inspired by Stable Baselines 3 and Tonic.


License: MIT License

