pushkalkatara / visual-navigation-agent-pytorch

Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning implemented in PyTorch

Target-driven 2/3/6 DoF Visual Navigation Model using Deep Reinforcement Learning

This repository extends the work Target-driven Visual Navigation using Deep Reinforcement Learning to support agent action spaces with 2, 3, or 6 degrees of freedom (DoF). The code is written in PyTorch and uses Habitat Sim as the simulator.

Major changes in the repository:

  • Added support for the Habitat simulator.
  • Changed the configuration to support higher-degree-of-freedom (2/3/6 DoF) action spaces; a sketch of what this looks like follows this list.
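
As a rough illustration of the second change, the sketch below builds Habitat-Sim action spaces of increasing freedom out of the simulator's built-in move/look actions. The step size, turn angle, and the exact grouping of actions into 2/3/6 DoF are illustrative assumptions, not the repository's exact settings.

import habitat_sim
from habitat_sim.agent import ActionSpec, ActuationSpec, AgentConfiguration

FORWARD_STEP = 0.25  # metres per translation step (assumed)
TURN_ANGLE = 10.0    # degrees per rotation step (assumed)

# Built-in Habitat-Sim actions grouped by degrees of freedom; the exact
# grouping used in this repository may differ.
ACTION_SETS = {
    "2dof": ["move_forward", "turn_left", "turn_right"],
    "3dof": ["move_forward", "turn_left", "turn_right", "look_up", "look_down"],
    "6dof": ["move_forward", "move_backward", "move_left", "move_right",
             "move_up", "move_down", "turn_left", "turn_right",
             "look_up", "look_down"],
}

def make_agent_config(dof):
    # Rotations/looks are actuated in degrees, translations in metres.
    config = AgentConfiguration()
    config.action_space = {
        name: ActionSpec(name, ActuationSpec(
            amount=TURN_ANGLE if ("turn" in name or "look" in name)
            else FORWARD_STEP))
        for name in ACTION_SETS[dof]
    }
    return config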

Introduction

This repository provides a PyTorch implementation of the deep siamese actor-critic model for indoor scene navigation introduced in the following paper:

Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning
Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi
ICRA 2017, Singapore
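
For orientation, here is a minimal PyTorch sketch of the model's overall shape: the current observation and the target image are embedded by shared (siamese) layers, fused, and fed to policy and value heads. The layer sizes and the single policy/value head are simplifications; the paper uses pretrained ResNet features and one scene-specific branch per scene.

import torch
import torch.nn as nn

class SiameseActorCritic(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, n_actions=4):
        super().__init__()
        # Shared (siamese) embedding applied to both observation and target features
        self.embed = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        # Fusion layer over the concatenated embeddings
        self.fuse = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # action logits
        self.value = nn.Linear(hidden, 1)           # state value

    def forward(self, obs_feat, target_feat):
        x = torch.cat([self.embed(obs_feat), self.embed(target_feat)], dim=-1)
        h = self.fuse(x)
        return self.policy(h), self.value(h)

# Example with random stand-ins for pooled ResNet features:
obs_feat, target_feat = torch.randn(1, 2048), torch.randn(1, 2048)
logits, value = SiameseActorCritic()(obs_feat, target_feat)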

Setup and run

This code is implemented in PyTorch 1.4 and uses Habitat as the simulator. Follow the steps provided in the Habitat-Sim repository to install the simulator.
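
Once the simulator is installed, a quick import check confirms it is available (assuming habitat_sim exposes __version__, as recent releases do):

import habitat_sim
print(habitat_sim.__version__)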

In order to start training, run these commands:

git clone https://github.com/pushkalkatara/visual-navigation-agent-pytorch.git
cd visual-navigation-agent-pytorch
python train.py

Scenes

We use the Arkansaw, Ballou, Hillsdale, Roane, and Stokes environments from the Gibson dataset (in its Habitat-compatible form) to perform the experiments.
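
As a sketch of how one of these scenes is brought up in Habitat-Sim (the asset path is hypothetical, and attribute names such as scene_id vary slightly across habitat_sim versions):

import habitat_sim

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "data/gibson/Arkansaw.glb"  # assumed local path to the Gibson asset

agent_cfg = habitat_sim.agent.AgentConfiguration()  # sensor setup omitted for brevity
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
obs = sim.step("move_forward")  # one of the built-in default actions
sim.close()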

Acknowledgements

I would like to acknowledge the following references, which were of great help in implementing the model.

License

MIT
