Path Planning for the Robots Using Reinforcement Learning

We have a robot that aims to collect data from several low-power IoT sensors. Because the sensors are low-power, they cannot communicate over long ranges; hence, the robot must approach each sensor to collect its data. The robot starts its mission from the start terminal. There is a charging station in the environment so that the robot can recharge its battery when it is running out of energy. There are also several obstacles in the environment.

The task of the robot is to collect the data of all sensors in the shortest possible time while avoiding any collision with the obstacles.

In the following image, we have depicted the environment:

Env

Red square: starting position

Green square: charging station

Black circles: IoT sensors

Blue blocks: obstacles
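
To make the setup concrete, here is a minimal sketch of how such a grid world could be encoded. The grid size, entity positions, and reward values below are illustrative assumptions, not the configuration used in the repository's notebook:

```python
import numpy as np

# Illustrative grid-world layout (positions and sizes are assumptions).
GRID_SIZE = 10
START = (0, 0)                            # red square: starting position
CHARGER = (9, 9)                          # green square: charging station
SENSORS = {(2, 7), (5, 3), (8, 6)}        # black circles: IoT sensors
OBSTACLES = {(4, 4), (4, 5), (6, 2)}      # blue blocks: obstacles

def step_reward(pos, visited_sensors):
    """Assumed reward shaping: penalize each time step, punish collisions,
    and reward collecting a sensor's data for the first time."""
    if pos in OBSTACLES:
        return -10.0                      # collision penalty
    if pos in SENSORS and pos not in visited_sensors:
        return +5.0                       # data collected from a new sensor
    return -0.1                           # small per-step cost (shortest time)
```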

In this project, we define the state as a four-channel image, shown below:

img

Based on this state definition, we can use convolutional neural networks (CNNs) to solve the underlying MDP.
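
As a sketch of what the four-channel state and a CNN acting on it might look like: the channel assignment, the network architecture, and the use of PyTorch below are assumptions for illustration; the repository's notebook may differ.

```python
import numpy as np
import torch
import torch.nn as nn

def encode_state(robot_pos, charger, sensors, obstacles, grid_size=10):
    """Stack one binary channel per entity type: robot position, charging
    station, remaining (uncollected) sensors, and obstacles.
    The channel assignment is an assumption based on the legend above."""
    state = np.zeros((4, grid_size, grid_size), dtype=np.float32)
    state[0][robot_pos] = 1.0
    state[1][charger] = 1.0
    for s in sensors:
        state[2][s] = 1.0
    for o in obstacles:
        state[3][o] = 1.0
    return torch.from_numpy(state).unsqueeze(0)   # shape (1, 4, H, W)

class QNetwork(nn.Module):
    """Small CNN mapping the four-channel state image to action values
    (e.g. 4 moves: up / down / left / right); the architecture is illustrative."""
    def __init__(self, grid_size=10, n_actions=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * grid_size * grid_size, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.head(self.conv(x))
```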

A sample result:

res

Reference

This project is part of my PhD thesis at the University of Toronto. If you use this code for research purposes, please cite:

Khamidehi B. Aerial Robots for Wireless Coverage, Traffic Monitoring, and Transport Applications: A Path Planning and Fleet Management Perspective (Doctoral dissertation, University of Toronto (Canada)). [Link]
