This is a PyTorch implementation of the paper: Active Neural Topological Mapping for Multi-Agent Exploration
Project Website: https://sites.google.com/view/mantm
Install the basic dependencies as follows:
sudo apt-get update || true
# Most systems already have these packages, but install the essentials
# for EGL support if they are missing:
sudo apt-get install -y --no-install-recommends libjpeg-dev libglm-dev libgl1-mesa-glx libegl1-mesa-dev mesa-utils xorg-dev freeglut3-dev
pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101 -f https://download.pytorch.org/whl/torch_stable.html
pip install wandb icecream setproctitle gym seaborn tensorboardX slackweb psutil pyastar2d einops ifcfg tsp torch_geometric
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple magnum scikit-image==0.17.2 lmdb scikit-learn==0.24.1 scikit-fmm yacs imageio-ffmpeg numpy-quaternion numba tqdm gitpython attrs==19.1.0 tensorboard
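After installing, a quick sanity check can confirm that the key packages resolve. This is a minimal sketch; the module list below is only a subset of the requirements above, so extend it as needed:

```python
import importlib.util

# Subset of the packages installed above; extend as needed.
REQUIRED = ["torch", "torchvision", "gym", "einops", "numba", "skimage", "yacs"]

def find_missing(modules):
    """Return the modules that cannot be resolved by this interpreter."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

missing = find_missing(REQUIRED)
if missing:
    print("missing packages:", ", ".join(missing))
else:
    print("all key packages found")
```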
Under root directory of this repository, run
pip install -e .
We use modified versions of habitat-sim and habitat-lab, so please follow the instructions below to set up the Habitat simulator.
git submodule update --init --recursive
cd habitat/habitat-sim
./build.sh --headless  # make sure you build via this shell script
cd ../habitat-lab  # relative to habitat/habitat-sim
pip install -e .
# if `pip install -e .` fails for habitat-lab (habitat-api), run `./build.sh --headless` instead.
Remember to add the following line to your ~/.bashrc to extend PYTHONPATH:
export PYTHONPATH=$PYTHONPATH:/PATH_TO_THIS_PROJECT/mantm/onpolicy/envs/habitat/habitat-sim/
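A small sketch to confirm the path was picked up before launching training (`/PATH_TO_THIS_PROJECT` is the placeholder from the export line above; substitute your actual checkout location):

```python
import os

def on_pythonpath(path):
    """Check whether `path` appears in the PYTHONPATH environment variable."""
    entries = os.environ.get("PYTHONPATH", "").split(os.pathsep)
    return os.path.normpath(path) in (os.path.normpath(e) for e in entries if e)

# Placeholder path from the export line above; replace with your checkout.
habitat_sim_path = "/PATH_TO_THIS_PROJECT/mantm/onpolicy/envs/habitat/habitat-sim/"
if not on_pythonpath(habitat_sim_path):
    print("habitat-sim is not on PYTHONPATH; source ~/.bashrc or export it first")
```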
Please download the Gibson 3D indoor dataset following the instructions here.
The dataset should be placed in the directory onpolicy/envs/habitat/data
in the following format:
scene_datasets/
  gibson/
    Adrian.glb
    Adrian.navmesh
    ...
datasets/
  pointnav/
    gibson/
      v1/
        train/
        val/
        ...
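A quick sketch to verify the layout above is in place before launching the simulator (the expected sub-paths are taken from the tree shown; the data root is the directory named earlier):

```python
import os

# Expected sub-paths under onpolicy/envs/habitat/data, per the layout above.
EXPECTED = [
    "scene_datasets/gibson",
    "datasets/pointnav/gibson/v1/train",
    "datasets/pointnav/gibson/v1/val",
]

def missing_dataset_dirs(data_root, expected=EXPECTED):
    """Return the expected sub-directories that do not exist under data_root."""
    return [p for p in expected if not os.path.isdir(os.path.join(data_root, p))]

# Example:
# print(missing_dataset_dirs("onpolicy/envs/habitat/data"))
```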
For the Neural SLAM module and the Local Policy, download the pretrained models via
mkdir -p pretrained_models  # create the target directory first
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1A1s_HNnbpvdYBUAiw2y1JmmELRLfAJb8' -O pretrained_models/model_best.local
wget --no-check-certificate 'https://drive.google.com/uc?export=download&id=1o5OG7DIUKZyvi5stozSqRpAEae1F2BmX' -O pretrained_models/model_best.slam
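A failed Google Drive download often leaves a small HTML error page (or nothing) instead of the checkpoint, so a minimal sketch to sanity-check the downloaded files:

```python
import os

def check_downloads(paths):
    """Flag model files that are missing or empty after the wget step."""
    problems = {}
    for p in paths:
        if not os.path.isfile(p):
            problems[p] = "missing"
        elif os.path.getsize(p) == 0:
            problems[p] = "empty"
    return problems

# Example:
# print(check_downloads(["pretrained_models/model_best.local",
#                        "pretrained_models/model_best.slam"]))
```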
You can start training by running sh train_graph_habitat.sh
in the directory onpolicy/scripts.
Similarly, you can run sh render_graph_habitat.sh
in the directory onpolicy/scripts to start evaluation. Remember to set the path to the corresponding model, as well as the correct hyperparameters and related evaluation parameters.
We also provide our implementations of planning-based baselines. You can run sh eval_habitat_ft.sh
to evaluate the planning-based methods. Note that algorithm_name
determines the global planning method and can be set to one of ft_rrt, ft_apf, ft_nearest, and ft_utility.
You can also visualize the results and generate GIFs by adding --use_render and --save_gifs to the scripts.
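The evaluation invocations above can be sketched programmatically. Note this is a hypothetical helper: the exact flag syntax accepted by eval_habitat_ft.sh is an assumption, so check the script in onpolicy/scripts for its real interface; only the names algorithm_name, --use_render, and --save_gifs come from the text above.

```python
# Valid planning-based baselines, per the text above.
BASELINES = ["ft_rrt", "ft_apf", "ft_nearest", "ft_utility"]

def build_eval_command(algorithm_name, use_render=False, save_gifs=False):
    """Build an evaluation command line for one planning-based baseline.

    The flag spelling is an assumption -- verify against eval_habitat_ft.sh.
    """
    if algorithm_name not in BASELINES:
        raise ValueError(f"unknown algorithm_name: {algorithm_name}")
    cmd = ["sh", "eval_habitat_ft.sh", "--algorithm_name", algorithm_name]
    if use_render:
        cmd.append("--use_render")
    if save_gifs:
        cmd.append("--save_gifs")
    return cmd

# Example: build (but don't run) commands for every baseline.
for algo in BASELINES:
    print(" ".join(build_eval_command(algo, use_render=True, save_gifs=True)))
```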
If you find this repository useful, please cite our paper:
@misc{yang2023active,
title={Active Neural Topological Mapping for Multi-Agent Exploration},
author={Xinyi Yang and Yuxiang Yang and Chao Yu and Jiayu Chen and Jingchen Yu and Haibing Ren and Huazhong Yang and Yu Wang},
year={2023},
eprint={2311.00252},
archivePrefix={arXiv},
primaryClass={cs.RO}
}