ucaiado / tennis-rl

Training a pair of agents to play tennis

Tennis RL

In this project, I tackle a multi-agent problem in which two agents must collaborate and compete to solve a task. I work with the Tennis environment, where two agents control rackets to bounce a ball over a net.

Example

If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.

The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Each agent receives its own, local observation. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping.

The task is episodic. After each episode, the (undiscounted) rewards received by each agent are summed, and the maximum of the two resulting scores is taken as the episode score. The environment is considered solved when the average of these episode scores over 100 consecutive episodes is at least +0.5. This project is part of the Deep Reinforcement Learning Nanodegree program from Udacity. You can check my report here.
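
To make the scoring rule concrete, below is a minimal sketch of the interaction and scoring loop, assuming the unityagents wrapper used in the Nanodegree; the path to the Tennis binary is a placeholder, and random actions stand in for trained policies.

import numpy as np
from collections import deque
from unityagents import UnityEnvironment

env = UnityEnvironment(file_name="Tennis.app")   # placeholder path to the Tennis binary
brain_name = env.brain_names[0]
scores_window = deque(maxlen=100)                # scores of the last 100 episodes

for episode in range(1, 2001):
    env_info = env.reset(train_mode=True)[brain_name]
    num_agents = len(env_info.agents)
    scores = np.zeros(num_agents)                # undiscounted return of each agent
    while True:
        # random actions stand in for the trained policies in this sketch
        actions = np.clip(np.random.randn(num_agents, 2), -1, 1)
        env_info = env.step(actions)[brain_name]
        scores += env_info.rewards
        if np.any(env_info.local_done):
            break
    scores_window.append(np.max(scores))         # episode score = max over the two agents
    if len(scores_window) == 100 and np.mean(scores_window) >= 0.5:
        print('Environment solved in {} episodes'.format(episode))
        break

env.close()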

Install

This project requires Python 3.5 or higher, the Tennis environment (follow the instructions to download it here), and some additional Python libraries installed.

Run

In a terminal or command window, navigate to the top-level project directory tennis-rl/ (that contains this README) and run the following command:

$ jupyter notebook notebooks/2018-11-07-tranning-maddpg.ipynb

This will open the Jupyter Notebook software and the main notebook in your browser, which you can use to explore and reproduce the experiment that generated the results presented in the report.

Additionally, if you are interested in playing with the parameters, you can navigate to the directory tennis-rl/drlnd/, modify the file config.yaml, and run the training from the drlnd folder by executing the command below:

$ python trainning.py
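
If you do change config.yaml, the snippet below is a minimal sketch of how such a file can be loaded from Python with PyYAML; the keys shown are only illustrative examples of typical hyperparameters and may not match the actual file in this repository.

import yaml

with open('config.yaml') as fh:
    cfg = yaml.safe_load(fh)

# hypothetical keys, shown only to illustrate a YAML-driven setup
lr_actor = cfg.get('lr_actor', 1e-4)         # learning rate of the actor networks
lr_critic = cfg.get('lr_critic', 1e-3)       # learning rate of the critic networks
buffer_size = cfg.get('buffer_size', 100000) # replay buffer capacity
batch_size = cfg.get('batch_size', 128)      # minibatch size sampled from the buffer
gamma = cfg.get('gamma', 0.99)               # discount factor
tau = cfg.get('tau', 1e-3)                   # soft-update rate of the target networks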

References

  1. Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., et al. Continuous control with deep reinforcement learning. arXiv.org, 2015.
  2. Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv.org, 2017.
  3. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal Policy Optimization Algorithms. arXiv.org, 2017.

License

The contents of this repository are covered under the MIT License.
