morningsky / pysc2-rl-agent

(D)RL Agent For PySC2 Environment. Close replication of DeepMind's SC2LE paper architecture.


Mini-games covered: MoveToBeacon, CollectMineralShards, DefeatRoaches, DefeatZerglingsAndBanelings, FindAndDefeatZerglings, CollectMineralsAndGas, BuildMarines

Introduction

The aim of this project is two-fold:

a.) Reproduce the baseline DeepMind results by implementing an RL agent (A2C) with a neural network architecture as close as possible to the one described in [1]. This includes embedding categorical (spatial) features into continuous space with 1x1 convolutions and a multi-head policy supporting actions with variable arguments (both spatial and non-spatial); a rough sketch of this follows below.

b.) Improve the results and/or sample efficiency of the baseline solution, either with alternative algorithms (such as PPO [2]), a reduced set of features (unified across all mini-games), or alternative approaches such as HRL [3] or auxiliary tasks [4].
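As a rough illustration of point a.), the sketch below embeds categorical screen features with 1x1 convolutions and produces a multi-head policy output (an action-identifier head plus spatial and non-spatial argument heads). It is written in PyTorch purely for illustration; the module, layer sizes, and head names are assumptions and do not mirror this repository's actual code.

```python
# Illustrative sketch only (assumed names and sizes, not this repo's implementation):
# categorical feature layers are one-hot encoded, embedded with 1x1 convolutions,
# and a shared trunk feeds a multi-head policy over the action id and its arguments.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EmbedAndPolicy(nn.Module):
    def __init__(self, categorical_scales, num_actions, spatial_size=32, embed_dim=8):
        super().__init__()
        # One 1x1 convolution per categorical feature layer acts as its embedding.
        self.embeds = nn.ModuleList(
            nn.Conv2d(scale, embed_dim, kernel_size=1) for scale in categorical_scales
        )
        self.trunk = nn.Sequential(
            nn.Conv2d(embed_dim * len(categorical_scales), 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        flat_dim = 32 * spatial_size * spatial_size
        self.action_head = nn.Linear(flat_dim, num_actions)      # which function to call
        self.screen_arg_head = nn.Conv2d(32, 1, kernel_size=1)   # spatial (x, y) argument
        self.queued_arg_head = nn.Linear(flat_dim, 2)            # example non-spatial argument

    def forward(self, categorical_maps):
        # categorical_maps: list of LongTensors of shape (batch, H, W) with raw category ids.
        channels = []
        for feat, embed in zip(categorical_maps, self.embeds):
            one_hot = F.one_hot(feat, embed.in_channels).permute(0, 3, 1, 2).float()
            channels.append(embed(one_hot))
        x = self.trunk(torch.cat(channels, dim=1))
        flat = x.flatten(1)
        return {
            "action_id": F.log_softmax(self.action_head(flat), dim=-1),
            "screen": F.log_softmax(self.screen_arg_head(x).flatten(1), dim=-1),
            "queued": F.log_softmax(self.queued_arg_head(flat), dim=-1),
        }
```

In the SC2LE setup there is one such head per argument type and unavailable actions are masked out before sampling; the sketch above only shows the general shape of the embedding and the multi-head output.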

A video of the trained agent on all mini-games can be seen here: https://youtu.be/gEyBzcPU5-w

Running

  • To train an agent, execute python main.py --envs=1 --map=MoveToBeacon.
  • To resume training from the last checkpoint, specify the --restore flag.
  • To run in inference mode, specify the --test flag.
  • To change the number of rendered environments, specify the --render= flag.
  • To change the state/action space, specify the path to a JSON config with --cfg_path=. The configuration with a reduced feature space, used to achieve some of the results above, is:
{
  "feats": {
    "screen": ["visibility_map", "player_relative", "unit_type", "selected", "unit_hit_points_ratio", "unit_density"],
    "minimap": ["visibility_map", "camera", "player_relative", "selected"],
    "non_spatial": ["player", "available_actions"]
  }
}
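
For reference, below is a minimal sketch of how such a config could be read and mapped onto PySC2 feature-layer indices. The helper name is hypothetical and this is not the repository's actual loader; it only shows how the named features line up with pysc2.lib.features.

```python
# Hypothetical config loader (illustrative only, not the repository's actual code):
# resolves the feature names from the JSON config to PySC2 feature-layer indices.
import json

from pysc2.lib import features


def load_feature_config(cfg_path):
    """Return feature-layer indices for the screen/minimap features named in the config."""
    with open(cfg_path) as f:
        cfg = json.load(f)["feats"]

    screen = [getattr(features.SCREEN_FEATURES, name).index for name in cfg["screen"]]
    minimap = [getattr(features.MINIMAP_FEATURES, name).index for name in cfg["minimap"]]
    return {"screen": screen, "minimap": minimap, "non_spatial": cfg["non_spatial"]}
```

The returned indices can then be used to select the corresponding layers from the observation's screen and minimap feature stacks.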

Requirements

A good GPU and CPU are recommended, especially for the full state/action space.

Results

These results were gathered with the full feature/action configuration, using 32 parallel environments and 16-step rollouts (n-steps).

| Map | This Agent | DeepMind | Human |
| --- | --- | --- | --- |
| MoveToBeacon | 26.3 ± 0.5 | 26 | 28 |
| CollectMineralShards | 106 ± 4.3 | 103 | 177 |
| DefeatRoaches | 147 ± 38.7 | 100 | 215 |
| DefeatZerglingsAndBanelings | 230 ± 106.4 | 62 | 727 |
| FindAndDefeatZerglings | 43 ± 5 | 45 | 61 |
| CollectMineralsAndGas | 3340 ± 185 | 3978 | 7566 |
| BuildMarines | 0.55 ± 0.25 | 3 | 133 |

Learning Curves

Below are screenshots of TensorBoard views of the agent's learning curves for each mini-game. Each curve represents a run with a different random seed. The y-axis shows the cumulative episode score and the x-axis the number of updates; each update contains 512 samples (32 environments × 16 n-steps).

MoveToBeacon

CollectMineralShards

DefeatRoaches

DefeatZerglingsAndBanelings

CollectMineralsAndGas

Related Work

The authors of xhujoy/pysc2-agents and pekaalto/sc2aibot were the first to attempt replicating [1], and their implementations served as a general inspiration during development of this project. However, their aim was more towards replicating the results than the architecture, and they omit key aspects such as full feature and action space support. The authors of simonmeister/pysc2-rl-agents also aim to replicate both the results and the architecture, though their final goals seem to lie in a different direction. Their policy implementation was used as a loose reference for this project.

Acknowledgements

Work in this repository was done as part of a bachelor's thesis at the University of Tartu under the supervision of Ilya Kuzovkin and Tambet Matiisen.

References

[1] StarCraft II: A New Challenge for Reinforcement Learning
[2] Proximal Policy Optimization Algorithms
[3] Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation
[4] Reinforcement Learning with Unsupervised Auxiliary Tasks

License: MIT

