Visual Teach & Repeat 3 (VT&R3)

What is VT&R3?

VT&R3 is a C++ implementation of the Teach and Repeat navigation framework. It enables a robot to be taught a network of traversable paths and then closely repeat any part of that network. VT&R3 is designed for easy adaptation to various sensor (camera, lidar, radar, GPS, etc.) and robot combinations. So far, we have used VT&R3 for teach-and-repeat navigation with a stereo camera, a lidar, or a stereo camera combined with GPS.

Software Support

This repository contains active support for the following features:

  • Multi-experience localization with a stereo camera
  • Deep-learned visual features with a stereo camera
  • Lidar ICP odometry and localization

With support for so many sensors, the repository has grown quite large. To reduce compilation time, the environment variable VTR_PIPELINE allows a single pipeline to be selected at compile time instead of run time. The supported pipelines are:

  • LIDAR
  • VISION
  • RADAR
  • RADAR_LIDAR

If the variable is unset, all pipelines are compiled, and the user selects which pipeline to use at run time through the config file parameter pipeline.type.
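
For example, here is a minimal sketch of building only the lidar pipeline, assuming the usual colcon/ROS 2 build workflow (the exact build invocation for your workspace may differ):

  # Select the lidar pipeline at compile time (hypothetical workspace setup):
  export VTR_PIPELINE=LIDAR
  colcon build --symlink-install

  # Or leave VTR_PIPELINE unset to compile all pipelines, then choose one
  # at run time via the pipeline.type parameter in the config file.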

The primary supported version of VT&R3 requires an NVIDIA driver with CUDA capabilities. The current Dockerfile requires a driver capable of supporting CUDA 11.7. A GPU is required for all versions of the vision (camera) pipeline and for the lidar and radar features that use PyTorch models for processing.

If no GPU is available, a CPU-only version is available, but only for lidar. Note that the CPU version of LibTorch is still installed for easier compilation, but the models are unlikely to run fast enough on a CPU to be useful.
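
To check whether the installed driver meets the CUDA requirement, the standard NVIDIA utility reports the maximum CUDA version the driver supports:

  # The "CUDA Version" field in the header of the output shows the highest
  # CUDA version the installed driver supports; it should be 11.7 or higher.
  nvidia-smi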

Reproducing Results of VT&R3 Papers

VT&R3-related papers usually focus on demonstrating one specific feature of VT&R3 rather than the whole system, and they require additional scripts to run experiments and evaluate results. Therefore, going forward, we will create a separate repository for each paper with instructions on how to reproduce its results.

Knowing the Codebase

Articles to help you get familiar with VT&R3, along with more information, can be found on the wiki page.

Citation

Please cite the following paper when using VT&R3 for your research:

@article{paul2010vtr,
  author  = {Furgale, Paul and Barfoot, Timothy D.},
  title   = {Visual teach and repeat for long-range rover autonomy},
  journal = {Journal of Field Robotics},
  volume  = {27},
  number  = {5},
  pages   = {534--560},
  year    = {2010},
  doi     = {10.1002/rob.20342}
}


License

VT&R3 is licensed under the Apache License 2.0.

