This codebase accompanies the paper "Efficient Episodic Memory Utilization of Cooperative Multi-agent Reinforcement Learning (EMU)" and builds on the open-source GRF, PyMARL, and SMAC codebases. The paper was accepted at ICLR 2024 and is available on OpenReview and arXiv.
PyMARL is WhiRL's framework for deep multi-agent reinforcement learning; our code includes implementations of the following algorithms:
- QPLEX: Duplex Dueling Multi-Agent Q-Learning
- EMC: Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration
- CDS: Celebrating Diversity in Shared Multi-Agent Reinforcement Learning
Note: Please use the updated configuration files for experiments; we have corrected some errors in the previously uploaded configurations.

To train EMU(QPLEX) on SC2 tasks, run the following command:

```
python3 src/main.py --config=EMU_sc2 --env-config=sc2 with env_args.map_name=5m_vs_6m
```

For EMU(CDS), change the config file to `EMU_sc2_cds`.
To train EMU(QPLEX) on GRF tasks, run the following command:

```
python3 src/main.py --config=EMU_grf --env-config=academy_3_vs_1_with_keeper
```

For EMU(CDS), change the config file to `EMU_grf_cds`.
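To run the commands above over several scenarios, a small wrapper script can help. This is a minimal sketch, not part of the repository: the helper function `emu_sc2_cmd` and the particular SMAC map names swept below are illustrative assumptions, and launching is left commented out so the script only prints the commands it would run.

```shell
#!/bin/sh
# Hypothetical helper (not in the repo): build the EMU(QPLEX) training
# command for a given SC2 map, following the invocation shown above.
emu_sc2_cmd() {
  echo "python3 src/main.py --config=EMU_sc2 --env-config=sc2 with env_args.map_name=$1"
}

# Sweep a few SMAC maps sequentially (map list is an example).
for MAP in 5m_vs_6m 3s_vs_5z corridor; do
  CMD=$(emu_sc2_cmd "$MAP")
  echo "$CMD"
  # eval "$CMD"   # uncomment to actually launch training
done
```

The same pattern applies to the GRF commands by swapping in `--config=EMU_grf` and the scenario name.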
If you find this repository useful, please cite our paper:
```
@article{na2024efficient,
  title={Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning},
  author={Na, Hyungho and Seo, Yunkyeong and Moon, Il-chul},
  journal={arXiv preprint arXiv:2403.01112},
  year={2024}
}
```