This repository contains all code and experiments for the competitive policy gradient (CoPG) algorithm. The paper on competitive policy gradient can be found here. The code for the Trust Region Competitive Policy Optimization (TRCPO) algorithm can be found here.
Experiment videos are available here.
- The code is tested on Python 3.5.2.
- Only the Markov Soccer experiment requires the OpenSpiel library; the other five experiments can be run directly.
- Requires `torch.utils.tensorboard`.
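Before running the experiments, a quick environment check like the following can confirm the dependencies above are installed. This is an illustrative sketch, not part of the repository; the module names are taken from the list above, and `open_spiel` is only needed for Markov Soccer.

```python
import importlib.util

# Modules used by the experiments; open_spiel is only needed
# for the Markov Soccer experiment.
required = ["torch", "torch.utils.tensorboard", "open_spiel"]

for name in required:
    try:
        found = importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. torch) is not installed at all.
        found = False
    print(f"{name}: {'ok' if found else 'MISSING'}")
```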
```
.
├── notebooks
│   ├── LQ_game.ipynb
│   ├── bilinear_game.ipynb
│   ├── RockPaperScissors.ipynb
│   ├── matching_pennies.ipynb
│   ├── MarkovSoccer.ipynb
│   └── CarRacing.ipynb
├── game                         # each game has a separate folder with this structure
│   ├── game.py
│   ├── copg_game.py
│   ├── gda_game.py
│   ├── network.py
│   ├── pretrained_models.py     (if applicable)
│   └── results.py               (if applicable)
├── copg_optim
│   ├── copg.py
│   ├── critic_functions.py
│   └── utils.py
├── car_racing_simulator
└── ...
```
- The Jupyter notebooks are the best place to start; they contain demonstrations and results.
- The `copg_optim` folder contains the optimization code.
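The optimizer's central idea is that each player's update accounts for the opponent's simultaneous update, rather than taking independent gradient steps. As a toy illustration (a standalone sketch, not the repository's `copg.py` API), compare simultaneous gradient descent-ascent (GDA) with a competitive-gradient-style update on the bilinear game f(x, y) = x·y: GDA spirals away from the equilibrium (0, 0), while the competitive update contracts toward it.

```python
# Zero-sum bilinear game f(x, y) = x * y: x minimizes, y maximizes.
# The unique equilibrium is (0, 0). Illustrative sketch only,
# not the repository's implementation.

def gda_step(x, y, eta):
    # Simultaneous gradient descent-ascent: each player ignores
    # the other's concurrent move. The squared norm x^2 + y^2 is
    # multiplied by (1 + eta^2) every step, so GDA diverges.
    return x - eta * y, y + eta * x

def competitive_step(x, y, eta):
    # Competitive-gradient-style update: each player anticipates the
    # opponent's simultaneous step. For f = x*y the closed-form update
    # divides the squared norm by (1 + eta^2), so it converges.
    return (x - eta * y) / (1 + eta**2), (y + eta * x) / (1 + eta**2)

eta = 0.2
x, y = 1.0, 1.0
for _ in range(50):
    x, y = gda_step(x, y, eta)
gda_norm = (x**2 + y**2) ** 0.5

x, y = 1.0, 1.0
for _ in range(50):
    x, y = competitive_step(x, y, eta)
cgd_norm = (x**2 + y**2) ** 0.5

print(f"GDA distance from equilibrium after 50 steps:         {gda_norm:.3f}")
print(f"Competitive update distance after 50 steps:           {cgd_norm:.3f}")
```

Both players start at (1, 1); GDA ends farther from the equilibrium than it started, while the competitive update ends closer.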
Open a Jupyter notebook and run it to see the results, or run an experiment from the command line (Rock Paper Scissors shown here):
```
git clone "address"
cd copg/RockPaperScissors
python3 copg_rps.py
cd ../tensorboard
tensorboard --logdir .
```
You can then inspect the training results in TensorBoard.