deligentfool / COLA_MADDPG

COLA-MADDPG

PyTorch Implementation of COLA-MADDPG based on MADDPG-pytorch.

Requirements

PyTorch, OpenAI Gym, and NumPy, plus the bundled multiagent-particle-envs package (installation instructions below).

How to Run

All training code is contained within main.py. To view the available options, run:

python main.py --help

For vanilla MADDPG:

python main.py simple_tag_coop examplemodel --n_episodes 20000

For COLA-MADDPG:

python main.py simple_tag_coop examplemodel --n_episodes 20000 --consensus
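The commands above imply a positional scenario name, a positional model name, and optional flags. A minimal argparse sketch of that interface (hypothetical — the real main.py may use different names and defaults; check python main.py --help):

```python
import argparse

# Hypothetical CLI sketch inferred from the example commands above.
parser = argparse.ArgumentParser()
parser.add_argument("env_id", help="scenario name, e.g. simple_tag_coop")
parser.add_argument("model_name", help="name under which models/logs are saved")
parser.add_argument("--n_episodes", type=int, default=25000,
                    help="number of training episodes (placeholder default)")
parser.add_argument("--consensus", action="store_true",
                    help="enable COLA-MADDPG instead of vanilla MADDPG")

if __name__ == "__main__":
    args = parser.parse_args()
    print(args.env_id, args.model_name, args.n_episodes, args.consensus)
```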

Multi-agent Particle Env

  • To install, cd into the multiagent-particle-envs directory and run: pip install -e .

  • To interactively view the moving-to-landmark scenario (see others in ./scenarios/), run: bin/interactive.py --scenario simple.py

  • Known dependencies: OpenAI gym, numpy

  • To use the environments, look at the code for importing them in make_env.py.
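The environments follow a Gym-style reset/step API where observations, actions, rewards, and done flags are per-agent lists. A hedged sketch of driving one episode with random actions (assumes make_env.py exposes a make_env(scenario_name) function, as in the upstream multiagent-particle-envs):

```python
def run_episode(env, n_steps=25):
    """Roll out one episode with random actions.

    Assumes a Gym-style multi-agent API: reset() returns a list of per-agent
    observations, and step() returns (obs_n, rew_n, done_n, info).
    """
    obs_n = env.reset()
    totals = [0.0] * len(obs_n)          # cumulative reward per agent
    for _ in range(n_steps):
        act_n = [space.sample() for space in env.action_space]
        obs_n, rew_n, done_n, _ = env.step(act_n)
        totals = [t + r for t, r in zip(totals, rew_n)]
        if all(done_n):
            break
    return totals

if __name__ == "__main__":
    from make_env import make_env        # provided by multiagent-particle-envs
    env = make_env("simple_spread")
    print(run_episode(env))
```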

Scenarios

The three scenarios we used in the paper are simple_tag_coop, simple_spread, and simple_reference_no_comm. They correspond to "Cooperative Predator-Prey", "Cooperative Navigation" and "Cooperative Pantomime" in the text, respectively.

Acknowledgements

The OpenAI baselines TensorFlow implementation and Ilya Kostrikov's PyTorch implementation of DDPG were used as references. After the majority of this codebase was complete, OpenAI released their code for MADDPG, and I made some tweaks to this repo to reflect some of the details in their implementation (e.g. gradient norm clipping and policy regularization).
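Gradient norm clipping, mentioned above, rescales all gradients when their global L2 norm exceeds a threshold. A framework-free sketch of the rule (PyTorch's torch.nn.utils.clip_grad_norm_ applies the same scale factor across parameter tensors):

```python
import math

def clip_grad_norm(grads, max_norm):
    """Rescale a flat list of gradient values so their global L2 norm
    is at most max_norm (same rule as torch.nn.utils.clip_grad_norm_)."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)  # small eps avoids div-by-zero
        grads = [g * scale for g in grads]
    return grads
```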

License

MIT License
