manuelsh / banana-navigator

Implementation of a deep reinforcement learning agent catching the right bananas.

Banana Navigator

Introduction

This repository contains the implementation of a deep reinforcement learning agent, trained with a deep Q-network (DQN) algorithm similar to the one in DeepMind's paper Playing Atari with Deep Reinforcement Learning. The objective of the agent is to solve one of Unity's environments, specifically the "Banana Collector" environment.
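
For intuition, the core of the DQN update is the temporal-difference target r + gamma * max_a' Q_target(s', a'). The sketch below is a generic PyTorch illustration of that loss, not necessarily the exact code used in the notebook:

import torch
import torch.nn.functional as F

def dqn_loss(q_network, target_network, states, actions, rewards, next_states, dones, gamma=0.99):
    """Standard DQN TD loss for a batch of transitions.

    actions is expected as a LongTensor of shape (batch, 1);
    rewards and dones as FloatTensors of shape (batch, 1).
    """
    q_values = q_network(states).gather(1, actions)              # Q(s, a) for the actions taken
    with torch.no_grad():                                        # no gradients through the target
        max_next_q = target_network(next_states).max(dim=1, keepdim=True)[0]
        targets = rewards + gamma * max_next_q * (1 - dones)     # r + gamma * max_a' Q_target(s', a')
    return F.mse_loss(q_values, targets)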

[image: the Banana Collector environment]

In this environment the agent moves around to collect bananas. It receives a positive reward each time it collects a yellow banana and a negative reward each time it collects a blue banana. See the section Environment details for more information.

The agent is implemented and trained in the notebook Navigation.ipynb, where it uses as state input the processed information provided by the environment (e.g. distance to bananas, velocity, direction, etc.).
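
As a rough sketch of what such an agent's value function could look like (the layer sizes here are illustrative assumptions, not necessarily the architecture used in Navigation.ipynb), a fully connected Q-network mapping the 37-dimensional state to 4 action values can be written as:

import torch.nn as nn

class QNetwork(nn.Module):
    """Small fully connected network: 37-dim state in, 4 action values out."""
    def __init__(self, state_size=37, action_size=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_size),
        )

    def forward(self, state):
        return self.net(state)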

For a report on the results, check the following link.

Environment details

The agent will interact with a simplified version of the Banana Collector environment.

The following has been extracted from the materials of the Udacity Deep Reinforcement Learning course.

Rewards

A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas.

State space

The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction.
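
Assuming the unityagents package shipped with the Udacity repository, the state vector can be read from the environment roughly as follows (the file name is a placeholder for the binary you download below):

from unityagents import UnityEnvironment

# Placeholder path; use the Banana binary matching your OS (see the download section below)
env = UnityEnvironment(file_name="Banana.x86_64")
brain_name = env.brain_names[0]

env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]   # 37-dimensional state vector
print(len(state))                         # 37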

Action space

Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:

  • 0 - move forward
  • 1 - move backward
  • 2 - turn left
  • 3 - turn right
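
A typical way to pick among these four actions during training is an epsilon-greedy rule over the Q-network's output; a small sketch (assuming a q_network like the one above) is:

import numpy as np
import torch

def select_action(q_network, state, eps=0.1):
    """Epsilon-greedy choice over the four discrete actions (0-3)."""
    if np.random.rand() < eps:
        return np.random.randint(4)                   # explore: random action
    state_t = torch.from_numpy(state).float().unsqueeze(0)
    with torch.no_grad():
        action_values = q_network(state_t)            # shape (1, 4)
    return int(action_values.argmax(dim=1).item())    # exploit: greedy action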

Goal of the task

The task is episodic, and in order to solve the environment, your agent must get an average score of +13 over 100 consecutive episodes.
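
One simple way to check this condition is to keep the last 100 episode scores in a sliding window and compare their mean against 13, for example:

from collections import deque
import numpy as np

scores_window = deque(maxlen=100)   # most recent 100 episode scores

def environment_solved(episode_score):
    """Append the latest score and report whether the +13 average is reached."""
    scores_window.append(episode_score)
    return len(scores_window) == 100 and np.mean(scores_window) >= 13.0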

Requirements installation

To be able to run the notebook, one needs to prepare the Python environment and download the Unity environment.

Preparing the environment

As described in the Udacity github repo, to set up your python environment, follow the instructions below.

  1. Create (and activate) a new environment with Python 3.6.

    • Linux or Mac:
    conda create --name drlnd python=3.6
    source activate drlnd
    • Windows:
    conda create --name drlnd python=3.6 
    activate drlnd
  2. Follow the instructions in this repository to perform a minimal install of OpenAI gym.

    • Next, install the classic control environment group by following the instructions here.
    • Then, install the box2d environment group by following the instructions here.
  3. Clone the repository (if you haven't already!), and navigate to the python/ folder. Then, install several dependencies.

git clone https://github.com/udacity/deep-reinforcement-learning.git
cd deep-reinforcement-learning/python
pip install .
  4. Create an IPython kernel for the drlnd environment.
python -m ipykernel install --user --name drlnd --display-name "drlnd"
  5. Before running code in a notebook, change the kernel to match the drlnd environment by using the drop-down Kernel menu.

Download the Unity Environment

Select and download the environment that matches your operating system:

  • Linux: click here
  • Mac OSX: click here
  • Windows (32-bit): click here (https://s3-us-west-1.amazonaws.com/udacity-drlnd/P1/Banana/Banana_Windows_x86.zip)
  • Windows (64-bit): click here

Then, place the file in the p1_navigation/ folder in the DRLND GitHub repository, and unzip (or decompress) the file.

(For Windows users) Check out this link if you need help with determining if your computer is running a 32-bit version or 64-bit version of the Windows operating system.

(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the "headless" version of the environment. You will not be able to watch the agent without enabling a virtual screen, but you will be able to train the agent. (To watch the agent, you should follow the instructions to enable a virtual screen, and then download the environment for the Linux operating system above.)

Instructions to train the agent

Once your environment is set up, just run the notebook Navigation.ipynb.
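
Conceptually, the training loop inside the notebook follows the standard agent/environment interaction pattern. The sketch below assumes the env and brain_name objects from the snippet above, plus a hypothetical agent object exposing act and step methods (names chosen here for illustration, not necessarily the ones used in Navigation.ipynb):

eps, eps_min, eps_decay = 1.0, 0.01, 0.995   # epsilon-greedy schedule (illustrative values)

for episode in range(1, 1001):
    env_info = env.reset(train_mode=True)[brain_name]
    state = env_info.vector_observations[0]
    score = 0.0
    while True:
        action = agent.act(state, eps)                       # hypothetical: epsilon-greedy action
        env_info = env.step(action)[brain_name]              # send the action to Unity
        next_state = env_info.vector_observations[0]
        reward = env_info.rewards[0]
        done = env_info.local_done[0]
        agent.step(state, action, reward, next_state, done)  # hypothetical: store transition and learn
        state, score = next_state, score + reward
        if done:
            break
    eps = max(eps_min, eps * eps_decay)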

License: MIT License

