https://github.com/qixuxiang/orb-slam2_with_semantic_labelling
There were too many large files in the .git folder of the original repository, and I hate them, so I moved the code to orb-slam2_with_semantic_label.
Authors: Xuxiang Qi (qixuxiang16@nudt.edu.cn), Shaowu Yang (shaowu.yang@nudt.edu.cn), Yuejin Yan (nudtyyj@nudt.edu.cn)
Current version: 1.0.0
0. Introduction
orb-slam2_with_semantic_label is a visual SLAM system based on ORB_SLAM2 [1-2]. ORB-SLAM2 is a great visual SLAM method that has been widely applied in robot applications, but it cannot provide semantic information in environmental mapping. In this work, we present a method to build a 3D dense semantic map, which utilizes both 2D image labels from YOLOv3 [3] and 3D geometric information.
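As an illustration of this idea, the following minimal C++ sketch shows how a pixel labelled by the 2D detector can be back-projected into a labelled 3D point using the aligned depth image and pinhole intrinsics. The types and function names are illustrative assumptions, not the project's actual API.

// Minimal sketch (not the project's actual code): back-project a pixel that the
// 2D detector labelled into a 3D point and keep the class id with it.
// Assumes a pinhole camera (fx, fy, cx, cy) and a depth image aligned to the
// RGB image; all names here are illustrative.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

struct CameraIntrinsics { float fx, fy, cx, cy; };

// pcl::PointXYZL stores x, y, z plus a 32-bit label, used here for the class id.
pcl::PointXYZL backProject(int u, int v, float depth, uint32_t classId,
                           const CameraIntrinsics& K)
{
    pcl::PointXYZL p;
    p.z = depth;                       // depth in metres
    p.x = (u - K.cx) * depth / K.fx;   // pinhole back-projection
    p.y = (v - K.cy) * depth / K.fy;
    p.label = classId;                 // semantic label from the 2D detection
    return p;
}

// Accumulate labelled points into a dense semantic cloud; producing a map in
// the world frame would additionally use the camera pose estimated by ORB-SLAM2.
void addLabelledPoint(pcl::PointCloud<pcl::PointXYZL>& cloud,
                      int u, int v, float depth, uint32_t classId,
                      const CameraIntrinsics& K)
{
    if (depth > 0.0f)                  // skip invalid depth readings
        cloud.push_back(backProject(u, v, depth, classId, K));
}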
1. Related Publications
Deep Learning Based Semantic Labelling of 3D Point Cloud in Visual SLAM
2. Prerequisites
2.1 Requirements
- Ubuntu 14.04/Ubuntu 16.04/Ubuntu 18.04
- ORB-SLAM2
- CUDA 8 (required; CUDA 9/10 will cause a segmentation fault)
- C++11 (required)
- GCC >= 5.0
- cmake
- OpenCV 2 or OpenCV 3 (may not work with OpenCV 4)
- PCL 1.7 or PCL 1.8 (may not work with PCL 1.9)
2.2 Installation
Refer to the corresponding original repositories (ORB_SLAM2 and YOLO) for installation tutorials.
2.3 Build
git clone https://github.com/qixuxiang/orb-slam2_with_semantic_label.git
sh build.sh
3. Run the code
- Download yolov3.weights, yolov3.cfg and coco.names from darknet and put them in the bin folder. These files can also be found in YOLO V3. Then, make a directory named img in the bin folder, i.e., execute sudo mkdir img in the bin folder.
- Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it to the data folder.
- Associate RGB images and depth images using the python script associate.py (a sketch of the matching it performs is shown after this list). We already provide associations for some of the sequences in Examples/RGB-D/associations/. You can generate your own associations file by executing:
python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
- Change TUMX.yaml to TUM1.yaml, TUM2.yaml or TUM3.yaml for freiburg1, freiburg2 and freiburg3 sequences respectively. Change PATH_TO_SEQUENCE_FOLDER to the uncompressed sequence folder. You can then run the project with:
cd bin
./rgbd_tum ../Vocabulary/ORBvoc.txt ../Examples/RGB-D/TUM2.yaml ../data/rgbd-data ../data/rgbd-data/associations.txt
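For reference, the association step above pairs each RGB timestamp with the nearest depth timestamp within a tolerance. The C++ sketch below only illustrates that matching logic; the 0.02 s tolerance and all names are assumptions for illustration, not values taken from associate.py.

// Rough sketch of the RGB/depth timestamp matching performed by the association step.
// The 0.02 s tolerance and all names are illustrative assumptions.
#include <cmath>
#include <utility>
#include <vector>

std::vector<std::pair<double, double>>
associateStamps(const std::vector<double>& rgbStamps,
                const std::vector<double>& depthStamps,
                double maxDiff = 0.02)
{
    std::vector<std::pair<double, double>> matches;
    for (double tRgb : rgbStamps) {
        double best = -1.0;
        double bestDiff = maxDiff;
        for (double tDepth : depthStamps) {      // brute-force nearest-neighbour search
            double diff = std::fabs(tRgb - tDepth);
            if (diff < bestDiff) { bestDiff = diff; best = tDepth; }
        }
        if (best >= 0.0)
            matches.emplace_back(tRgb, best);    // one RGB/depth pair per line of associations.txt
    }
    return matches;
}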
References
[1] Mur-Artal R, Montiel J M M, Tardos J D. ORB-SLAM: a versatile and accurate monocular SLAM system[J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163.
[2] Mur-Artal R, Tardos J D. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras[J]. arXiv preprint arXiv:1610.06475, 2016.
[3] Redmon J, Farhadi A. YOLOv3: An Incremental Improvement[J]. arXiv preprint arXiv:1804.02767, 2018.
License
Our system is released under a GPLv3 license.
If you want to use the code for commercial purposes, please contact the authors.
Other issues
- We have not tested the code with a ROS bridge/node. The system relies on an extremely fast and tight coupling between mapping and tracking on the GPU, which I don't believe ROS supports natively in terms of message passing.
- I have only tested the code with OpenCV 2 + CUDA 8 + cuDNN 7 + PCL 1.8; CUDA 9/10 will cause a segmentation fault.
- You are welcome to submit an issue if you have problems; please include details of your software and system, such as Ubuntu 14/16, OpenCV 2/3, CUDA 9.0, GCC 5.4, etc.
- We provide a video here.