This repository extends the work Target-driven Visual Navigation using Deep Reinforcement Learning to support RL agent action spaces of 2, 3, or 6 DoF. The code is in PyTorch and the simulator used is Habitat Sim.
Major changes in the repository:
- Added support for Habitat Simulator
- Changed the configuration to support higher-degree-of-freedom (2/3/6 DoF) action spaces.
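The discrete action sets for the different DoF settings could be sketched as below. This is a minimal illustration: the action names mirror Habitat-Sim's default actuations, but the exact identifiers and groupings used in this repository's configuration are assumptions.

```python
# Illustrative sketch of how the agent's discrete action set might grow
# with the configured degrees of freedom. Action names follow Habitat-Sim's
# default actuation vocabulary; the repo's actual config keys may differ.

def build_action_space(dof):
    """Return the list of discrete actions for a 2, 3, or 6 DoF agent."""
    if dof == 2:
        # Planar motion: forward translation + yaw rotation
        return ["move_forward", "turn_left", "turn_right"]
    if dof == 3:
        # Add backward and lateral translation in the plane
        return ["move_forward", "move_backward", "move_left", "move_right",
                "turn_left", "turn_right"]
    if dof == 6:
        # Full translation (x, y, z) plus camera pitch on top of yaw
        return ["move_forward", "move_backward", "move_left", "move_right",
                "move_up", "move_down",
                "turn_left", "turn_right", "look_up", "look_down"]
    raise ValueError(f"Unsupported DoF: {dof}")
```

A larger action set makes exploration harder, which is part of what the higher-DoF experiments probe.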
This repository provides a PyTorch implementation of the deep siamese actor-critic model for indoor scene navigation introduced in the following paper:
Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning
Yuke Zhu, Roozbeh Mottaghi, Eric Kolve, Joseph J. Lim, Abhinav Gupta, Li Fei-Fei, and Ali Farhadi
ICRA 2017, Singapore
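A minimal PyTorch sketch of the deep siamese actor-critic idea: the current observation and the target image pass through a shared (siamese) embedding, are fused, and feed joint policy and value heads. Layer sizes, the 2048-d feature input, and the action count are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SiameseActorCritic(nn.Module):
    """Sketch of a target-driven siamese actor-critic (assumed dimensions)."""

    def __init__(self, feat_dim=2048, hidden=512, n_actions=6):
        super().__init__()
        # Shared siamese branch: the same weights embed both
        # the current observation and the target image features.
        self.embed = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
        # Fusion layer over the concatenated (observation, target) embeddings.
        self.fuse = nn.Sequential(nn.Linear(1024, hidden), nn.ReLU())
        self.policy = nn.Linear(hidden, n_actions)  # action logits
        self.value = nn.Linear(hidden, 1)           # state value

    def forward(self, obs_feat, target_feat):
        o = self.embed(obs_feat)
        t = self.embed(target_feat)
        h = self.fuse(torch.cat([o, t], dim=-1))
        return self.policy(h), self.value(h)
```

In the paper the fused representation additionally branches into scene-specific heads so one network serves many scenes; that detail is omitted here for brevity.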
This code is implemented in PyTorch 1.4 and uses Habitat as the simulator. Follow the steps provided at Habitat-Sim for simulator installation.
To start training, run these commands:
git clone https://github.com/pushkalkatara/visual-navigation-agent-pytorch.git
cd visual-navigation-agent-pytorch
python train.py
We use the Arkansaw, Ballou, Hillsdale, Roane, and Stokes environments from the Gibson-Habitat dataset for the experiments.
I would like to acknowledge the following references, which were a great help in implementing the model.
- "Habitat Baselines"
- "Asynchronous Methods for Deep Reinforcement Learning", Mnih et al., 2016
- David Silver's Deep RL course
- muupan's async-rl repo
- miyosuda's async_deep_reinforce repo
- PyTorch A3C implementation repo
MIT