mainakpal08 / Resilient-consensus-based-MARL

This repository includes a realization of the resilient projection-based consensus actor-critic algorithm that is resilient to adversarial attacks on communication channels.


Resilient projection-based consensus actor-critic (RPBCAC) algorithm

We implement the RPBCAC algorithm with nonlinear function approximation from [1] and focus on the training performance of cooperative agents in the presence of adversaries. We aim to validate the analytical results presented in the paper and to prevent adversarial attacks that can arbitrarily hurt cooperative network performance, including the attack studied in [2]. The repository contains the following folders:

  1. agents - contains resilient and adversarial agents
  2. environments - contains a grid world environment for the cooperative navigation task
  3. simulation_results - contains plots that show training performance
  4. training - contains functions for training agents

To train agents, execute main.py.
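
For orientation, the sketch below illustrates the general structure of a consensus-based actor-critic update: each agent performs a local temporal-difference (TD) step on its own critic and then mixes its critic parameters with values received from neighbors. This is a simplified, hypothetical sketch with a linear critic and plain averaging; the repository uses nonlinear approximation, and RPBCAC replaces the averaging with a resilient, projection-based rule. Function names here are illustrative and do not mirror the repository's API.

```python
# Schematic sketch (not the repository's API): one agent's critic step in a
# consensus-based actor-critic. The critic is linear for brevity.
import numpy as np

def local_td_update(w, phi_s, phi_s_next, reward, gamma=0.95, lr=0.01):
    """One temporal-difference step on the agent's own (linear) critic weights w."""
    td_error = reward + gamma * (phi_s_next @ w) - (phi_s @ w)
    return w + lr * td_error * phi_s

def consensus_step(own_w, neighbor_ws):
    """Plain (non-resilient) consensus: average own and neighbors' weights.
    RPBCAC replaces this averaging with a resilient, projection-based rule."""
    stacked = np.vstack([own_w] + list(neighbor_ws))
    return stacked.mean(axis=0)

# Toy usage with random features
rng = np.random.default_rng(0)
w = rng.normal(size=4)
w = local_td_update(w, phi_s=rng.normal(size=4), phi_s_next=rng.normal(size=4), reward=1.0)
w = consensus_step(w, [rng.normal(size=4) for _ in range(3)])
print(w)
```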

Multi-agent grid world: cooperative navigation

We train five agents in a grid-world environment. Their goal is to approach their desired positions without colliding with other agents in the network. We design a grid world of dimension (6 x 6) and consider a reward function that penalizes an agent for its distance from the target and for collisions with other agents.
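
As an illustration, a reward of this form might look like the snippet below; this is a hypothetical sketch, not the exact reward implemented in the environments folder.

```python
# Hypothetical reward for the cooperative navigation task: penalize distance
# to the agent's target and add a penalty for occupying another agent's cell.
# The exact shaping in environments/ may differ.
import numpy as np

def reward(agent_pos, target_pos, other_positions, collision_penalty=1.0):
    dist = np.abs(np.array(agent_pos) - np.array(target_pos)).sum()  # Manhattan distance
    collided = any(tuple(agent_pos) == tuple(p) for p in other_positions)
    return -dist - (collision_penalty if collided else 0.0)

print(reward((0, 0), (3, 2), [(1, 1), (0, 0)]))  # -6.0 (distance 5 + collision 1)
```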

We compare the cooperative network performance under the RPBCAC algorithm with trimming parameter values H=0 and H=1, where H corresponds to the number of adversarial agents assumed to be present in the network. We consider four scenarios (a schematic sketch of the adversarial behaviors follows the list):

  1. All agents are cooperative. They maximize the team-average expected returns.
  2. One agent is greedy as it maximizes its own expected returns. It shares parameters with other agents but does not apply consensus updates.
  3. One agent is faulty and does not have a well-defined objective. It shares fixed parameter values with other agents.
  4. One agent is strategic; it maximizes its own returns and leads the cooperative agents to minimize their returns. The strategic agent has knowledge of other agents' rewards and updates two critic estimates (one critic is used to improve the adversary's policy and the other to hurt the cooperative agents' performance).
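
The sketch below illustrates, in schematic form, how the adversarial behaviors differ in what they broadcast to neighbors and how they update; the class names and structure are hypothetical and do not mirror the agents folder.

```python
# Hypothetical sketch of the three adversarial behaviors; illustrative only.
import numpy as np

class GreedyAgent:
    """Maximizes its own return; broadcasts its true critic parameters but
    never incorporates consensus updates from neighbors."""
    def __init__(self, dim):
        self.w = np.zeros(dim)
    def broadcast(self):
        return self.w
    def consensus_update(self, received):
        pass  # ignores neighbors

class FaultyAgent:
    """No well-defined objective; broadcasts fixed (never updated) parameters."""
    def __init__(self, dim):
        self.w = np.random.default_rng(0).normal(size=dim)
    def broadcast(self):
        return self.w  # constant over training

class StrategicAgent:
    """Keeps two critic estimates: one trained on its own reward to improve its
    policy, and a second, manipulated critic that it broadcasts to degrade the
    cooperative agents' performance."""
    def __init__(self, dim):
        self.w_own = np.zeros(dim)        # used for the adversary's own policy
        self.w_broadcast = np.zeros(dim)  # trained to mislead cooperative agents
    def broadcast(self):
        return self.w_broadcast

print(FaultyAgent(4).broadcast())
```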

The simulation results below show that the RPBCAC with H=1 (right) clearly outperforms the non-resilient case with H=0 (left). Performance is measured by the episode returns.

1) All cooperative

2) Three cooperative + one greedy

3) Three cooperative + one faulty

4) Three cooperative + one malicious

The folder with resilient agents contains the RPBCAC agent as well as an agent that applies the method of trimmed means in the consensus updates (RTMCAC).
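
For reference, a coordinate-wise trimmed-mean consensus step of the kind used by trimmed-mean agents can be sketched as follows; this is a minimal, hypothetical illustration rather than the repository's implementation.

```python
# Minimal coordinate-wise trimmed-mean consensus sketch (hypothetical): in each
# coordinate, discard the H largest and H smallest neighbor values before
# averaging with the agent's own estimate.
import numpy as np

def trimmed_mean_consensus(own_w, neighbor_ws, H=1):
    neighbors = np.vstack(neighbor_ws)            # shape: (num_neighbors, dim)
    sorted_vals = np.sort(neighbors, axis=0)      # sort each coordinate separately
    kept = sorted_vals[H:len(neighbor_ws) - H]    # drop H smallest and H largest
    stacked = np.vstack([own_w, kept])            # agent keeps its own estimate
    return stacked.mean(axis=0)

# Example: with H=1, a single outlier neighbor cannot drag the consensus value.
own = np.array([1.0, 1.0])
neighbors = [np.array([1.1, 0.9]), np.array([0.9, 1.1]), np.array([100.0, -100.0])]
print(trimmed_mean_consensus(own, neighbors, H=1))  # [1.05 0.95]
```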

Comparison

TODO

References

[1] Figura, M., Lin, Y., Liu, J., and Gupta, V. Resilient Consensus-based Multi-agent Reinforcement Learning with Function Approximation. arXiv preprint arXiv:2111.06776, 2021.

[2] Figura, M., Kosaraju, K. C., and Gupta, V. Adversarial attacks in consensus-based multi-agent reinforcement learning. arXiv preprint arXiv:2103.06967, 2021.



Languages

Python 95.2%, Shell 4.8%