There are 49 repositories under the visual-odometry topic.
OpenVSLAM: A Versatile Visual SLAM Framework
An unsupervised learning framework for depth and ego-motion estimation from monocular videos
LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping
An Invitation to 3D Vision: A Tutorial for Everyone
[CoRL 21'] TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
Visual odometry package based on the hardware-accelerated NVIDIA Elbrus library with world-class quality and performance.
Unsupervised Scale-consistent Depth Learning from Video (IJCV2021 & NeurIPS 2019)
A general framework for map-based visual localization. It contains: 1) map generation, supporting traditional or deep-learning features; 2) hierarchical localization in a visual (point or line) map; 3) a fusion framework with IMU, wheel odometry, and GPS sensors.
Depth and Flow for Visual Odometry
[ICRA'23] The official Implementation of "Structure PLP-SLAM: Efficient Sparse Mapping and Localization using Point, Line and Plane for Monocular, RGB-D and Stereo Cameras"
A simple monocular visual odometry (part of vSLAM) using ORB keypoints, with initialization, tracking, a local map, and bundle adjustment. (WARNING: Hi, I'm sorry that this project is tuned for a course demo, not for real-world applications!!!)
Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction
A bunch of state estimation algorithms
This repository is a C++ OpenCV implementation of stereo odometry.
Efficient monocular visual odometry for ground vehicles on ARM processors
Learning Depth from Monocular Videos using Direct Methods, CVPR 2018
Implementation of ICRA 2019 paper: Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation
MATLAB Implementation of Visual Odometry using SOFT algorithm
EndoSLAM Dataset and an Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner
This repository intends to enable autonomous drone delivery with the Intel Aero RTF drone and PX4 autopilot. The code can be executed either on the real drone or in simulation on a PC using Gazebo. Its core is a Robot Operating System (ROS) node, which communicates with the PX4 autopilot through mavros. It uses SVO 2.0 for visual odometry, WhyCon for visual marker localization, and Ewok for trajectory planning with collision avoidance.
Papers related to deep learning and 3D vision
ROS 2 wrapper for the ZED SDK
Code for T-ITS paper "Unsupervised Learning of Depth, Optical Flow and Pose with Occlusion from 3D Geometry" and for ICRA paper "Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks".
[ECCV 2022]JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes
"Visual-Inertial Dataset" (RA-L'21 with ICRA'21): it contains harsh motions for VO/VIO, like pure rotation or fast rotation with various motion types.
Visual odometry using optical flow and neural networks
Implements the individual steps needed to estimate the 3D motion of the camera and outputs a plot of the camera trajectory.
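The classic pipeline behind entries like this one estimates relative camera motion from point correspondences: estimate the essential matrix, then decompose it into a rotation and a (scale-ambiguous) translation, picking the candidate that places points in front of both cameras. The sketch below is a minimal, self-contained NumPy illustration of that pipeline (linear eight-point algorithm on synthetic, noiseless, calibrated correspondences); it is not taken from any of the listed repositories, and all function names are our own.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear eight-point estimate of E from normalized correspondences.
    x1, x2: (N, 2) arrays of normalized image coordinates, N >= 8,
    satisfying [x2, 1]^T E [x1, 1] = 0."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one correspondence."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def recover_pose(E, x1, x2):
    """Pick, of the four (R, t) decompositions of E, the one that puts
    the most triangulated points in front of both cameras (cheirality)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    best, best_count = None, -1
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            P2 = np.hstack([R, t.reshape(3, 1)])
            count = sum(1 for a, b in zip(x1, x2)
                        if (X := triangulate(P1, P2, a, b))[2] > 0
                        and (R @ X + t)[2] > 0)
            if count > best_count:
                best, best_count = (R, t), count
    return best

# Synthetic test scene: 3D points in front of camera 1, known relative pose.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))
theta = 0.1  # ground truth: small rotation about y, translation along x
R_gt = np.array([[np.cos(theta), 0, np.sin(theta)],
                 [0, 1, 0],
                 [-np.sin(theta), 0, np.cos(theta)]])
t_gt = np.array([1.0, 0.0, 0.0])
x1 = X[:, :2] / X[:, 2:]            # pinhole projection, K = I (calibrated)
X2 = X @ R_gt.T + t_gt
x2 = X2[:, :2] / X2[:, 2:]

E = eight_point_essential(x1, x2)
R, t = recover_pose(E, x1, x2)      # t recovered up to scale only
```

Real systems replace the linear solve with RANSAC over noisy feature matches and chain successive relative poses into a trajectory; the translation scale must come from elsewhere (stereo, IMU, or a known baseline), which is why monocular VO drifts in scale.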
[ICCV 2021] Official implementation of "The Surprising Effectiveness of Visual Odometry Techniques for Embodied PointGoal Navigation"