ObVi-SLAM

ObVi-SLAM is a joint object-visual SLAM approach aimed at long-term multi-session robot deployments.

[Paper with added appendix] [Video]

Offline execution instructions coming soon. ROS implementation coming late 2023/early 2024.

Please email amanda.adkins4242@gmail.com with any questions!

Evaluation

For information on how to set up and run the comparison algorithms, see our evaluation repo.

Installation Instructions

TODO

  • dockerfile version (recommended)
  • native version

Minimal Execution Instructions

TODO

  • Explain the files needed and their structure (intrinsics, extrinsics, visual features, bounding boxes (optional), images)
  • Explain how to run given these files
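Until the official instructions above are filled in, here is a minimal sketch of one possible on-disk layout for these inputs. Every directory and file name in this snippet is an assumption for illustration only, not the structure ObVi-SLAM actually expects:

```python
from pathlib import Path

# Hypothetical input layout -- directory and file names below are assumptions,
# not ObVi-SLAM's actual required structure (see the TODO above).
root = Path("obvi_slam_data")
for sub in ("calibration", "visual_features", "bounding_boxes", "images"):
    (root / sub).mkdir(parents=True, exist_ok=True)

# Camera calibration files (assumed names):
(root / "calibration" / "intrinsics.txt").touch()
(root / "calibration" / "extrinsics.txt").touch()

print(sorted(p.as_posix() for p in root.rglob("*.txt")))
# -> ['obvi_slam_data/calibration/extrinsics.txt', 'obvi_slam_data/calibration/intrinsics.txt']
```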

Results from ROS bag sequence

TODO (Taijing, start here)

  • Explain how to preprocess rosbag to get the data needed for minimal execution above

Configuration File Guide

TODO

  • Explain how to modify the configuration file -- which parameters need to change for a different environment
  • (Lower priority) Explain each of the parameters in the config file

Evaluation Setup

For our experiments, we used YOLOv5 (based on this repo) with this model.

We used detections with labels 'lamppost', 'treetrunk', 'bench', and 'trashcan' with this configuration file.
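As an illustration of restricting detections to these four classes, here is a small sketch. The detection record format below is hypothetical (not ObVi-SLAM's actual input schema); only the class names come from our configuration:

```python
# Keep only YOLOv5-style detections whose label is one of the evaluated
# classes. The dict-based record format here is an assumption for illustration.
EVAL_LABELS = {"lamppost", "treetrunk", "bench", "trashcan"}

def filter_detections(detections):
    """Return only the detections labeled with an evaluated object class."""
    return [d for d in detections if d["label"] in EVAL_LABELS]

detections = [
    {"label": "lamppost", "confidence": 0.91, "bbox": [10, 20, 50, 120]},
    {"label": "car", "confidence": 0.88, "bbox": [200, 40, 320, 150]},
    {"label": "bench", "confidence": 0.75, "bbox": [60, 80, 140, 130]},
]
print([d["label"] for d in filter_detections(detections)])
# -> ['lamppost', 'bench']
```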

Please contact us if you would like to obtain the videos on which we performed the evaluation.

TODOs

  • Add installation instructions
  • Add offline execution instructions

About

Long-Term Object Visual SLAM

License: BSD 3-Clause "New" or "Revised" License

