Distributed Proximal Policy Optimization (DPPO)

MIT License
This is a PyTorch implementation of Emergence of Locomotion Behaviours in Rich Environments [1]. This project is based on Alexis David Jacq's DPPO project, but it has been rewritten and contains some modifications that appear to improve learning in some environments. In this code, I revised the Running Mean Filter, which leads to better performance (for example in Walker2D). I also rewrote the code so that the Actor Network and the Critic Network are kept separate. This change allows asymmetric actor-critic setups for tasks where information available at training time is not available at run time. Further, the actions in this project are sampled from a Beta distribution, which leads to better training speed and performance in a large number of tasks.
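
To illustrate these two design choices, here is a minimal PyTorch sketch of a separate actor/critic pair with a Beta-distribution policy head. The class names, hidden sizes, and the softplus parameterisation are my assumptions for illustration and are not taken from this repo's source:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Beta

class Actor(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh())
        self.alpha_head = nn.Linear(hidden, act_dim)
        self.beta_head = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        # softplus(.) + 1 keeps both concentration parameters above 1,
        # so the Beta density on (0, 1) stays unimodal.
        alpha = F.softplus(self.alpha_head(h)) + 1.0
        beta = F.softplus(self.beta_head(h)) + 1.0
        return Beta(alpha, beta)

class Critic(nn.Module):
    def __init__(self, obs_dim, hidden=64):
        super().__init__()
        self.value = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1))

    def forward(self, obs):
        return self.value(obs)

def select_action(actor, obs, act_low, act_high):
    # The Beta distribution lives on (0, 1); rescale samples to the
    # environment's action bounds before calling env.step().
    dist = actor(obs)
    a01 = dist.sample()
    action = act_low + (act_high - act_low) * a01
    return action, dist.log_prob(a01).sum(-1)

One practical advantage of the Beta policy is that sampled actions are bounded by construction, so no actions need to be clipped back into the valid range as with an unbounded Gaussian policy.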

Requirements

  • python 3.5.2
  • OpenAI Gym
  • mujoco-py
  • pytorch-cpu (please use the CPU (non-CUDA) version!!! --- I will solve the problem with the GPU (CUDA) version later)
  • pyro
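
One possible way to install these dependencies is sketched below; the package names are the usual PyPI ones, but the exact versions and the CPU-only PyTorch build for your platform are assumptions you should adapt:

pip install gym mujoco-py pyro-ppl
pip install torch   # pick the CPU-only build matching your platform (see pytorch.org)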

Instructions to run the code

Train your models:

cd /root-of-this-code/
python train_network.py

You can also try other MuJoCo environments. This code already includes a pre-trained model for one MuJoCo environment, Walker2d-v1, and you can try it yourself on your favourite task!
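
As a quick sanity check before training (a minimal sketch, assuming gym and mujoco-py are installed correctly), you can confirm that a MuJoCo environment loads:

import gym

env = gym.make('Walker2d-v1')    # swap in any other MuJoCo task name here
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
print(obs.shape, reward, done)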

Test your models:

cd /root-of-this-code/
python demo.py

Results

Training Curve


Demo: Walker2d-v1


Acknowledgement

This project is based on Alexis David Jacq's DPPO project.

Reference

[1] Nicolas Heess et al., "Emergence of Locomotion Behaviours in Rich Environments", arXiv:1707.02286, 2017.
