[ICRA2023] Implementation of Visual Language Maps for Robot Navigation

Home Page: https://vlmaps.github.io/

VLMaps

Visual Language Maps for Robot Navigation

Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

We present VLMaps (Visual Language Maps), a spatial map representation in which pretrained visual-language model features are fused into a 3D reconstruction of the physical world. Spatially anchoring visual-language features enables natural language indexing in the map, which can be used, for example, to localize landmarks or spatial references with respect to landmarks, enabling zero-shot spatial goal navigation without additional data collection or model finetuning.
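
Conceptually, landmark indexing over such a map reduces to comparing the text embedding of a natural-language query against the visual-language feature stored in each map cell. Below is a minimal, illustrative sketch of this idea using OpenAI's CLIP text encoder; the map array, its shape, and the random placeholder features are assumptions made for illustration only, not this repository's actual data format or API.

# Illustrative sketch only: `map_features` is a hypothetical stand-in for a real VLMap.
import numpy as np
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

# Hypothetical top-down grid map: H x W cells, each holding a D-dim visual-language feature.
H, W, D = 100, 100, 512
map_features = np.random.randn(H, W, D).astype(np.float32)
map_features /= np.linalg.norm(map_features, axis=-1, keepdims=True)

def index_landmark(query: str) -> np.ndarray:
    """Return an (H, W) heatmap of cosine similarity between the map and a text query."""
    tokens = clip.tokenize([query]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(tokens).float()
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    return map_features @ text_feat.cpu().numpy()[0]  # (H, W) similarity scores

heatmap = index_landmark("a sofa")
row, col = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"highest-scoring cell for 'a sofa': ({row}, {col})")

In the actual system, the per-cell features come from a pixel-aligned visual-language model fused over posed RGB-D observations rather than random placeholders; the sketch above only shows how a text query turns into a per-cell score map.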

Quick Start

Try VLMaps creation and landmark indexing in the accompanying Colab notebook.

To get started on your own machine, clone this repository locally:

git clone https://github.com/vlmaps/vlmaps.git

Install the requirements:

$ conda create -n vlmaps python=3.8 -y  # or use virtualenv
$ conda activate vlmaps
$ conda install jupyter -y
$ cd vlmaps
$ bash install.bash

Start the Jupyter notebook:

$ jupyter notebook demo.ipynb

Benchmark

Citation

If you find the dataset or code useful, please cite:

@inproceedings{huang23vlmaps,
  title={Visual Language Maps for Robot Navigation},
  author={Chenguang Huang and Oier Mees and Andy Zeng and Wolfram Burgard},
  booktitle={Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)},
  year={2023},
  address={London, UK}
}

License

MIT License
