vxgu86 / Reinforcement-Learning-With-Unity-G.E.A.R

A prototype of an autonomous agent for garbage collection.

Reinforcement Learning With Unity - Garbage Evaporating Autonomous Robot

This is a project completed by three students (Damian Bogunowicz, Sangram Gupta, HyunJung Jung) in conjunction with the Chair for Computer Aided Medical Procedures & Augmented Reality of the Technical University of Munich. For this project we were awarded the maximum grade of 1.0. We have created a prototype of an autonomous, intelligent agent for garbage collection named G.E.A.R (Garbage Evaporating Autonomous Robot). The goal of the agent is to collect relevant pieces of garbage while avoiding collisions with static objects (such as chairs or tables). The agent navigates the environment (a mock-up of a German Oktoberfest tent) using RGB-D camera input.

The purpose of this project was to broaden our knowledge in the areas of Reinforcement Learning, Game Development and 3D Computer Vision. If you would like to get more familiar with our work, a detailed description of the project, in the form of a blog post, can be found here.

Getting Started

Files and Directories

images: Sample images from the training set, which was used to train the Semantic Segmentation network.

unity-environment: Contains the whole environment, which can be opened in Unity 3D and run (for training or inference) using ML-Agents.

ml-agents: Contains the ml-agents library, modified for the purpose of this project.

pre-trained-models: Contains the pre-trained models (the ML-Agents models and the Semantic Segmentation model).

Prerequisites

Before you run the project, you need to install:

(Note: this presumably refers to versions after ML-Agents was upgraded from 0.6.)

Installing

To run the project:

  1. Open the /unity-environment/gear-unity project using Unity 3D. Additionally, please make sure that the ML-Agents Toolkit (including TensorFlowSharp) is correctly installed.

You should be welcomed by the following view (screenshot of the project in the Unity editor).

  2. Replace the ml-agents library (in the virtual environment created during the ML-Agents toolkit installation) with our ml-agents.

  3. Then, put pre-trained-models/latest_model_Encoder-Decoder-Skip_Dataset.ckpt into .../mlagents/trainers/models/. Finally, open the script .../mlagents/trainers/trainer_controller.py and edit lines 313 and 343 by replacing the current path to the SegNet .ckpt file with your path: .../mlagents/trainers/models/latest_model_Encoder-Decoder-Skip_Dataset.ckpt (this inelegant workaround can be skipped if one does not want to use SegNet for training).
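A cleaner alternative to editing the script by hand would be to resolve the checkpoint path from an environment variable with a fallback default. The sketch below is only an illustration of that idea; the helper name `segnet_checkpoint_path` and the variable `SEGNET_CKPT` are hypothetical and not part of the project:

```python
import os

def segnet_checkpoint_path(default_dir="mlagents/trainers/models"):
    """Resolve the SegNet checkpoint path, overridable via SEGNET_CKPT.

    Hypothetical helper: avoids hard-coding the path inside
    trainer_controller.py (lines 313 and 343).
    """
    default = os.path.join(
        default_dir, "latest_model_Encoder-Decoder-Skip_Dataset.ckpt"
    )
    return os.environ.get("SEGNET_CKPT", default)
```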

G.E.A.R Training & Inference

Setting the Parameters of the Environment

The user can change the parameters of the environment according to their needs. The parameters can be found in the Academy. They are:

  1. Parameters regarding the respawn of static objects (chairs and tables):
  • Table Pos Max Offset
  • Table Rotation Max
  • Chair Pos Max Offset
  • Spawn Area
  • N Spawn Xdir
  • N Spawn Zdir
  2. Parameters regarding the respawn of items (collectibles and non-collectibles):
  • N Garbages
  • Garbage Spawn Height
  • Ratio Wurst
  • Ratio Bread
  • Ratio Cup
  • N Valuables
  • Valuable Spawn Height
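The Ratio parameters presumably act as relative weights when garbage items are spawned. A minimal Python sketch of such weighted sampling (the actual spawning happens in the Unity C# scripts; the function name and exact mechanics here are assumptions for illustration only):

```python
import random

def sample_garbage_types(n_garbages, ratio_wurst, ratio_bread, ratio_cup):
    """Draw n_garbages item types with probability proportional to the ratios."""
    types = ["Wurst", "Bread", "Cup"]
    weights = [ratio_wurst, ratio_bread, ratio_cup]
    return random.choices(types, weights=weights, k=n_garbages)

# e.g. sample_garbage_types(10, 1, 1, 2) favours cups over wurst and bread
```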

Using PPO and Built-In Semantic Segmentation


This is the default setup for the environment.

Training

To train the robot from scratch using PPO, simply run the command: mlagents-learn trainer_config.yaml --run-id=test_run --train --slow.
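For reference, a trainer_config.yaml for ML-Agents of that era typically looks like the fragment below; the hyperparameter values shown here are illustrative defaults, not the ones used in this project:

```yaml
# Illustrative PPO hyperparameters (ML-Agents 0.x style) – adjust to taste.
default:
    trainer: ppo
    batch_size: 1024
    buffer_size: 10240
    learning_rate: 3.0e-4
    gamma: 0.99
    hidden_units: 128
    num_layers: 2
    max_steps: 5.0e5
```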

Inference

Set Brain Type in Academy/Brain to Internal. To run the inference and see the robot in action, drag pre-trained-models/PPO.bytes into Graph Model and run the simulation.

Using PPO and Custom Semantic Segmentation


To change from the default setup to the one which uses an external Semantic Segmentation network (a SegNet, trained using the Semantic Segmentation Suite):

  1. In HuggerAgent under Hugger Agent (Script) change Camera 1 from SegmentationCameraOneHot to RGBCamera.
  2. In Academy/Brain set Element 0/Width to 512 and Element 0/Height to 512. Switch off Element 0/Black And W.

(Note: this was originally 64 × 64 with Black And W switched on; after the change, the simulation runs noticeably slower.)
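The slowdown is unsurprising: switching the visual observation from 64 × 64 black-and-white to 512 × 512 colour increases the raw data per frame by nearly 200x (assuming 1 channel for black-and-white and 3 for RGB). A quick back-of-the-envelope check:

```python
def obs_pixels(width, height, channels):
    """Number of raw values in one visual observation."""
    return width * height * channels

small = obs_pixels(64, 64, 1)    # default: 64 x 64, Black And W on
large = obs_pixels(512, 512, 3)  # custom setup: 512 x 512 RGB
print(large / small)             # 192.0
```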

Training

In gear-unity/trainer_config.yaml set segmentation: true. Then, run mlagents-learn trainer_config.yaml --run-id=test_run --train --slow.

Inference

Set Brain Type in Academy/Brain to Internal. To run the inference and see the robot in action, drag pre-trained-models/PPOSegNet.bytes into Graph Model and run the simulation.

Using Imitation Learning


Training

For instructions on how to train an agent, simply follow the steps from the official ML-Agents instructions.

Inference

Set Brain Type in Academy/Brain to Internal. To run the inference and see the robot in action, drag pre-trained-models/BC.bytes into Graph Model and run the simulation.

(Note: the settings changed in the Custom Semantic Segmentation section must be reverted before this will run.)

Using Heuristic


Training

In gear-unity/trainer_config.yaml set heuristics: true. Then, run mlagents-learn trainer_config.yaml --run-id=test_run --train --slow.

Inference

Set Brain Type in Academy/Brain to Internal. To run the inference and see the robot in action, drag pre-trained-models/Heuristic.bytes into Graph Model and run the simulation.

(Note: the settings changed in the Custom Semantic Segmentation section must be reverted before this will run.)

Authors

Acknowledgments

We would like to thank:
