
Taxi Agent

Table of Contents

  1. Project Motivation
  2. Agent Environment
  3. Project Components
  4. Instructions
  5. Results
  6. Licensing, Authors, and Acknowledgements

Project Motivation

In this project, I used OpenAI Gym's Taxi-v2 environment to teach a taxi agent to navigate a small grid world using Q-Learning. The goal is to apply reinforcement-learning techniques from earlier lessons to solve a new environment.
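
For reference, tabular Q-Learning updates its action-value estimate after every transition (s, a, r, s') with the rule below, where α is the learning rate and γ the discount factor:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$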

Agent Environment

+---------+
|R: | : :G|
| : : : : |
| : : : : |
| | : | : |
|Y| : |B: |
+---------+
  • There are four designated locations in the grid world, indicated by R(ed), B(lue), G(reen), and Y(ellow). When an episode starts, the taxi starts off at a random square and the passenger is at a random location. The taxi must drive to the passenger's location, pick up the passenger, drive to the passenger's destination (another one of the four designated locations), and then drop off the passenger. Once the passenger is dropped off, the episode ends.

  • There are 500 possible states, corresponding to 25 possible taxi positions, 5 passenger locations (the four designated locations plus inside the taxi), and 4 destinations.

  • There are 6 possible actions, corresponding to moving North, East, South, or West, picking up the passenger, and dropping off the passenger.
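
A minimal sketch of how this environment can be created and inspected with OpenAI Gym (assuming the classic Gym API; newer Gym releases ship Taxi-v3 instead of Taxi-v2):

```python
import gym

# Build the Taxi-v2 environment (use 'Taxi-v3' on newer Gym releases).
env = gym.make('Taxi-v2')

print(env.observation_space.n)  # 500 discrete states
print(env.action_space.n)       # 6 discrete actions

state = env.reset()  # classic Gym API: reset() returns the initial state
env.render()         # prints the ASCII grid shown above
```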

Project Components

There are two components in this project.

  1. Agent

    • agent.py

      • Defines the Agent class, which stores the Q-Table and provides functions to update it and to select the next action (a minimal sketch appears after this list).
  2. Monitor

    • monitor.py

      • Provides the interact function, which runs episodes and measures how well the agent learns from interaction with the environment.
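
A minimal sketch of what the Agent class in agent.py might look like, assuming a defaultdict-backed Q-Table, an epsilon-greedy action selector, and the standard Q-Learning update. The hyperparameter values here are illustrative, not the ones tuned in this repository:

```python
import numpy as np
from collections import defaultdict


class Agent:
    def __init__(self, nA=6, alpha=0.1, gamma=0.9, epsilon=0.005):
        self.nA = nA                                     # number of actions
        self.Q = defaultdict(lambda: np.zeros(self.nA))  # Q-Table: state -> action values
        self.alpha = alpha                               # learning rate
        self.gamma = gamma                               # discount factor
        self.epsilon = epsilon                           # exploration rate

    def select_action(self, state):
        # Epsilon-greedy: explore with probability epsilon, otherwise act greedily.
        if np.random.random() < self.epsilon:
            return np.random.randint(self.nA)
        return int(np.argmax(self.Q[state]))

    def step(self, state, action, reward, next_state, done):
        # Q-Learning update: bootstrap from the greedy value of the next state.
        target = reward + (0.0 if done else self.gamma * np.max(self.Q[next_state]))
        self.Q[state][action] += self.alpha * (target - self.Q[state][action])
```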

Instructions

Run main.py.
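
main.py ties the pieces together. A sketch of that wiring, assuming interact follows the common signature interact(env, agent) and returns the per-episode average rewards along with the best average reward:

```python
# Sketch of main.py: create the environment and agent, then run the monitor loop.
import gym

from agent import Agent
from monitor import interact

env = gym.make('Taxi-v2')
agent = Agent()

# Assumed return values, based on the "Best average reward" line printed in Results.
avg_rewards, best_avg_reward = interact(env, agent)
print('Best average reward:', best_avg_reward)
```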

Results

Episode 20000/20000 || Best average reward 9.2246

Licensing, Authors, and Acknowledgements

License: MIT
