SJTU-ViSYS / M2DGR

M2DGR: a Multi-modal and Multi-scenario SLAM Dataset for Ground Robots (RA-L 2021 & ICRA 2022)

First Author: Jie Yin 殷杰   📝 [Paper]   ➡️ [Dataset Extension]   ⭐️[Presentation Video]

Figure 1. Sample Images

🎯 Notice

We strongly recommend testing newly proposed SLAM algorithms on our M2DGR / M2DGR-plus / Ground-Challenge benchmarks, because our data has the following features:

  1. Rich sensory information including vision, lidar, IMU, GNSS, event, thermal-infrared images, and so on.
  2. Various real-world scenarios including lifts, streets, rooms, halls, and so on.
  3. Our dataset poses great challenges to existing cutting-edge SLAM algorithms, including LIO-SAM and ORB-SLAM3. If your proposed algorithm outperforms these SOTA systems on our benchmark, your paper will be much more convincing and valuable.
  4. 🔥 Many excellent open-source projects have been built or evaluated on M2DGR/M2DGR-plus so far, for example, Ground-Fusion, LVI-SAM-Easyused, Log-LIO, Swarm-SLAM, VoxelMap++, GRIL-Calib, LINK3d, i-Octree, LIO-EKF, Fast-LIO ROS2, HC-LIO, LIO-RF, PIN-SLAM, LOG-LIO2, Section-LIO, and so on!

Table of Contents

  1. 💎 News & Updates
  2. Introduction
  3. License
  4. Sensor Setup
  5. ⭐️ Dataset Sequences
  6. 📝 Configuration Files
  7. Development Toolkits
  8. Acknowledgement

Tip

Check the table of contents above for a quick overview, and see the news below for the latest updates, especially the list of projects based on M2DGR.

News & Updates

LVI-SAM on M2DGR

  • ⭐️ 2022/02/18: We have uploaded a brand-new SLAM dataset with GNSS, vision, and IMU information. Here is the link: SJTU-GVI. Different from M2DGR, the new data was captured on a real car, and it records GNSS raw measurements with a Ublox ZED-F9P device to facilitate GNSS-SLAM research. Give us a star and fork the project if you like it.

  • 📄 2022/02/01: The paper has been accepted by both RA-L and ICRA 2022. It is available in both the arXiv version and the IEEE RA-L version.

Note

If you build an open-source project based on M2DGR or test a cutting-edge SLAM system on it, please open an issue to remind me to add your contribution.

INTRODUCTION

ABSTRACT:

We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite, including six fish-eye cameras and one sky-pointing RGB camera, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver, and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All these sensors were well calibrated and synchronized, and their data were recorded simultaneously. The ground-truth trajectories were obtained by a motion-capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1 TB) captured in diverse scenarios, including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public.

Keywords: Dataset, Multi-modal, Multi-scenario, Ground Robot

MAIN CONTRIBUTIONS:

  • We collected long-term challenging sequences for ground robots both indoors and outdoors with a complete sensor suite, which includes six surround-view fish-eye cameras, a sky-pointing fish-eye camera, a perspective color camera, an event camera, an infrared camera, a 32-beam LIDAR, two GNSS receivers, and two IMUs. To our knowledge, this is the first SLAM dataset focusing on ground robot navigation with such rich sensory information.
  • We recorded trajectories in several challenging scenarios, such as lifts and complete darkness, which can easily cause existing localization solutions to fail. These situations are commonly faced in ground-robot applications, yet they are seldom discussed in previous datasets.
  • We launched a comprehensive benchmark for ground robot navigation. On this benchmark, we evaluated existing state-of-the-art SLAM algorithms of various designs and analyzed their characteristics and defects individually.

VIDEO

ICRA2022 Presentation

For Chinese users, try bilibili

LICENSE

This work is licensed under the MIT License and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at 1195391308@qq.com for further communication.

If you face any problems when using this dataset, feel free to open an issue. If you find our dataset helpful in your research, please give this project a star. If you use M2DGR in an academic work, please cite:

@article{yin2021m2dgr,
  title={M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots},
  author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={2266--2273},
  year={2021},
  publisher={IEEE}
}
@article{yin2024ground,
  title={Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases},
  author={Yin, Jie and Li, Ang and Xi, Wei and Yu, Wenxian and Zou, Danping},
  journal={arXiv preprint arXiv:2402.14308},
  year={2024}
}

SENSOR SETUP

Acquisition Platform

Physical drawings and schematics of the ground robot are given below. The unit in the figures is the centimeter.

Figure 2. The GAEA Ground Robot Equipped with a Full Sensor Suite. The directions of the sensors are marked in different colors: red for X, green for Y, and blue for Z.

Sensor parameters

All sensors and tracking devices, together with their most important parameters, are listed below:

  • LIDAR: Velodyne VLP-32C, 360° horizontal field of view (FOV), -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°

  • RGB Camera: FLIR Pointgrey CM3-U3-13Y3C-CS, fish-eye lens, 1280*1024, 190° H-FOV, 190° V-FOV, 15 Hz

  • GNSS: Ublox M8T, GPS/BeiDou, 1 Hz

  • Infrared Camera: PLUG 617, 640*512, 90.2° H-FOV, 70.6° V-FOV, 25 Hz

  • V-I Sensor: Realsense D435i, RGB/depth 640*480, 69° H-FOV, 42.5° V-FOV, 15 Hz; IMU 6-axis, 200 Hz

  • Event Camera: Inivation DVXplorer, 640*480, 15 Hz

  • IMU: Handsfree A9, 9-axis, 150 Hz

  • GNSS-IMU: Xsens MTi 680G, GNSS-RTK localization precision 2 cm, 100 Hz; IMU 9-axis, 100 Hz

  • Laser Scanner: Leica MS60, localization accuracy 1 mm + 1.5 ppm

  • Motion-capture System: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz
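As a back-of-the-envelope check on the data volume, the VLP-32C parameters above (0.2° horizontal angular resolution, 32 beams, 10 Hz spin rate) directly determine the raw point rate. A small Python sketch (variable names are illustrative):

```python
# Point rate implied by the VLP-32C parameters listed above:
# 0.2 deg horizontal angular resolution, 32 beams, 10 Hz spin rate.
beams = 32
horizontal_resolution_deg = 0.2
spin_rate_hz = 10

# 360 / 0.2 = 1800 firing directions per revolution, 32 points each.
points_per_revolution = round(360 / horizontal_resolution_deg) * beams
points_per_second = points_per_revolution * spin_rate_hz

print(points_per_revolution)  # 57600 points per scan
print(points_per_second)      # 576000 points per second
```

Roughly 0.58 million LiDAR points per second, before any camera, event, or IMU streams are added, which is consistent with the multi-gigabyte bag sizes listed in the sequence tables below.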

The rostopics of our rosbag sequences are listed as follows:

  • LIDAR: /velodyne_points

  • RGB Camera: /camera/left/image_raw/compressed ,
    /camera/right/image_raw/compressed ,
    /camera/third/image_raw/compressed ,
    /camera/fourth/image_raw/compressed ,
    /camera/fifth/image_raw/compressed ,
    /camera/sixth/image_raw/compressed ,
    /camera/head/image_raw/compressed

  • GNSS Ublox M8T:
    /ublox/aidalm ,
    /ublox/aideph ,
    /ublox/fix ,
    /ublox/fix_velocity ,
    /ublox/monhw ,
    /ublox/navclock ,
    /ublox/navpvt ,
    /ublox/navsat ,
    /ublox/navstatus ,
    /ublox/rxmraw

  • Infrared Camera: /thermal_image_raw

  • V-I Sensor:
    /camera/color/image_raw/compressed ,
    /camera/imu

  • Event Camera:
    /dvs/events,
    /dvs_rendering/compressed

  • IMU: /handsfree/imu

DATASET SEQUENCES

All sequences are now public, together with their ground truth (GT).

Figure 3. A sample video with fish-eye images (both forward-looking and sky-pointing), a perspective image, a thermal-infrared image, an event image, and lidar odometry

An overview of M2DGR is given in the table below:

| Scenario | Street | Circle | Gate | Walk | Hall | Door | Lift | Room | Roomdark | TOTAL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number | 10 | 2 | 3 | 1 | 5 | 2 | 4 | 3 | 6 | 36 |
| Size/GB | 590.7 | 50.6 | 65.9 | 21.5 | 117.4 | 46.0 | 112.1 | 45.3 | 171.1 | 1220.6 |
| Duration/s | 7958 | 478 | 782 | 291 | 1226 | 588 | 1224 | 275 | 866 | 13688 |
| Dist/m | 7727.72 | 618.03 | 248.40 | 263.17 | 845.15 | 200.14 | 266.27 | 144.13 | 395.66 | 10708.67 |
| Ground Truth | RTK/INS | RTK/INS | RTK/INS | RTK/INS | Leica | Leica | Leica | Mocap | Mocap | --- |

Outdoors

Figure 4. Outdoor Sequences: all trajectories are mapped in different colors.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| gate_01 | 2021-07-31 | 16.4 GB | 172 s | dark, around gate | Rosbag | GT |
| gate_02 | 2021-07-31 | 27.3 GB | 327 s | dark, loop back | Rosbag | GT |
| gate_03 | 2021-08-04 | 21.9 GB | 283 s | day | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| Circle_01 | 2021-08-03 | 23.3 GB | 234 s | circle | Rosbag | GT |
| Circle_02 | 2021-08-07 | 27.3 GB | 244 s | circle | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| street_01 | 2021-08-06 | 75.8 GB | 1028 s | street and buildings, night, zigzag, long-term | Rosbag | GT |
| street_02 | 2021-08-03 | 83.2 GB | 1227 s | day, long-term | Rosbag | GT |
| street_03 | 2021-08-06 | 21.3 GB | 354 s | night, back and forth, full speed | Rosbag | GT |
| street_04 | 2021-08-03 | 48.7 GB | 858 s | night, around lawn, loop back | Rosbag | GT |
| street_05 | 2021-08-04 | 27.4 GB | 469 s | night, straight line | Rosbag | GT |
| street_06 | 2021-08-04 | 35.0 GB | 494 s | night, one turn | Rosbag | GT |
| street_07 | 2021-08-06 | 77.2 GB | 929 s | dawn, zigzag, sharp turns | Rosbag | GT |
| street_08 | 2021-08-06 | 31.2 GB | 491 s | night, loop back, zigzag | Rosbag | GT |
| street_09 | 2021-08-07 | 83.2 GB | 907 s | day, zigzag | Rosbag | GT |
| street_010 | 2021-08-07 | 86.2 GB | 910 s | day, zigzag | Rosbag | GT |
| walk_01 | 2021-08-04 | 21.5 GB | 291 s | day, back and forth | Rosbag | GT |

Indoors

Figure 5. Lift Sequences: The robot wandered around a hall on the first floor and then went to the second floor by lift. A laser scanner tracked the trajectory outside the lift.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| lift_01 | 2021-08-04 | 18.4 GB | 225 s | lift | Rosbag | GT |
| lift_02 | 2021-08-04 | 43.6 GB | 488 s | lift | Rosbag | GT |
| lift_03 | 2021-08-15 | 22.3 GB | 252 s | lift | Rosbag | GT |
| lift_04 | 2021-08-15 | 27.8 GB | 299 s | lift | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| hall_01 | 2021-08-01 | 29.1 GB | 351 s | random walk | Rosbag | GT |
| hall_02 | 2021-08-08 | 15.0 GB | 128 s | random walk | Rosbag | GT |
| hall_03 | 2021-08-08 | 20.5 GB | 164 s | random walk | Rosbag | GT |
| hall_04 | 2021-08-15 | 17.7 GB | 181 s | random walk | Rosbag | GT |
| hall_05 | 2021-08-15 | 35.1 GB | 402 s | circle | Rosbag | GT |

Figure 6. Room Sequences: recorded under a motion-capture system with twelve cameras.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| room_01 | 2021-07-30 | 14.0 GB | 72 s | room, bright | Rosbag | GT |
| room_02 | 2021-07-30 | 15.2 GB | 75 s | room, bright | Rosbag | GT |
| room_03 | 2021-07-30 | 26.1 GB | 128 s | room, bright | Rosbag | GT |
| room_dark_01 | 2021-07-30 | 20.2 GB | 111 s | room, dark | Rosbag | GT |
| room_dark_02 | 2021-07-30 | 30.3 GB | 165 s | room, dark | Rosbag | GT |
| room_dark_03 | 2021-07-30 | 22.7 GB | 116 s | room, dark | Rosbag | GT |
| room_dark_04 | 2021-08-15 | 29.3 GB | 143 s | room, dark | Rosbag | GT |
| room_dark_05 | 2021-08-15 | 33.0 GB | 159 s | room, dark | Rosbag | GT |
| room_dark_06 | 2021-08-15 | 35.6 GB | 172 s | room, dark | Rosbag | GT |

Alternating indoors and outdoors

Figure 7. Door Sequences: A laser scanner tracked the robot through a door from indoors to outdoors.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| door_01 | 2021-08-04 | 35.5 GB | 461 s | outdoor to indoor to outdoor, long-term | Rosbag | GT |
| door_02 | 2021-08-04 | 10.5 GB | 127 s | outdoor to indoor, short-term | Rosbag | GT |
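The GT files linked above are plain-text trajectories in TUM format (one `timestamp tx ty tz qx qy qz qw` line per pose), as implied by the `evo_ape tum` commands in the Evaluation section. A minimal Python loader sketch (`load_tum_trajectory` is an illustrative helper, not part of the dataset toolkit):

```python
import numpy as np

def load_tum_trajectory(path):
    """Load a TUM-format trajectory file: one pose per line,
    'timestamp tx ty tz qx qy qz qw'; lines starting with '#'
    are comments. Returns (timestamps, positions, quaternions)."""
    stamps, positions, quats = [], [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            vals = [float(v) for v in line.split()]
            stamps.append(vals[0])
            positions.append(vals[1:4])   # tx, ty, tz
            quats.append(vals[4:8])       # qx, qy, qz, qw
    return np.array(stamps), np.array(positions), np.array(quats)
```

Note that evo additionally associates estimated and GT poses by timestamp before computing errors; this sketch only reads a single file.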

CONFIGURATION FILES

For convenience of evaluation, we provide configuration files for some well-known SLAM systems below:

A-LOAM, LeGO-LOAM, LINS, LIO-SAM, VINS-MONO, ORB-Pinhole, ORB-Fisheye, ORB-Thermal, and CubemapSLAM.

Furthermore, a number of cutting-edge SLAM systems have been tested on M2DGR by the community. Here are the configuration files for ORB-SLAM2, ORB-SLAM3, VINS-Mono, DM-VIO, A-LOAM, Lego-LOAM, LIO-SAM, LVI-SAM, LINS, FastLIO2, Fast-LIVO, Faster-LIO, and hdl_graph_slam. Welcome to test! If you have more configuration files, please contact me and I will post them on this website.

DEVELOPMENT TOOLKITS

Extracting Images

  • For rosbag users, first build image_view:
roscd image_view
rosmake image_view
sudo apt-get install mjpegtools

Open one terminal and run roscore. Then, in another terminal, run:

rosrun image_transport republish compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw

Evaluation

We use the open-source tool evo for evaluation. To install evo, run:

pip install evo --upgrade --no-binary evo

To evaluate monocular visual SLAM, run:

evo_ape tum street_07.txt your_result.txt -vaps

To evaluate LiDAR SLAM, run:

evo_ape tum street_07.txt your_result.txt -vap

To test GNSS-based methods, run:

evo_ape tum street_07.txt your_result.txt -vp
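In the commands above, -v prints verbose statistics, -a aligns the estimate to the ground truth with a least-squares SE(3) fit, -s additionally corrects scale (needed for monocular methods, whose scale is unobservable), and -p shows plots. Conceptually, the reported translational APE RMSE can be sketched as follows; this is a simplified illustration of the metric (Umeyama alignment on associated positions), not evo's actual implementation, and `ate_rmse` is an illustrative name:

```python
import numpy as np

def ate_rmse(gt, est, with_scale=False):
    """Absolute trajectory error RMSE after least-squares (Umeyama)
    alignment of est to gt, similar in spirit to evo_ape with -a
    (and -s for scale). gt, est: (N, 3) associated positions."""
    mu_gt, mu_est = gt.mean(0), est.mean(0)
    gt_c, est_c = gt - mu_gt, est - mu_est
    # Optimal rotation from the SVD of the 3x3 correlation matrix.
    U, D, Vt = np.linalg.svd(gt_c.T @ est_c)
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # avoid reflections
        S[2, 2] = -1
    R = U @ S @ Vt
    s = (D * S.diagonal()).sum() / (est_c ** 2).sum() if with_scale else 1.0
    t = mu_gt - s * R @ mu_est
    aligned = (s * (R @ est.T)).T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(axis=1).mean()))
```

For an estimate that differs from the ground truth only by a rigid transform, this metric is (numerically) zero, which is why alignment must be disabled (as in the GNSS command above) when the global frame itself is being evaluated.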

Calibration

  • Camera intrinsics: visit Ocamcalib for the omnidirectional model, Vins-Fusion for the pinhole and MEI models, and use OpenCV for the Kannala-Brandt model.

  • IMU intrinsics: visit Imu_utils.

  • Camera-IMU extrinsics: visit Kalibr.

  • Lidar-IMU extrinsics: visit Lidar_IMU_Calib.

  • Camera-Lidar extrinsics: visit Autoware.

Getting RINEX files

For GNSS-based methods like RTKLIB, we usually need data in RINEX format. To make use of the GNSS raw measurements, we use the Link toolkit.

ROS drivers for UVC cameras

We wrote a ROS driver for UVC cameras to record our thermal-infrared images: UVC ROS driver.

ACKNOWLEDGEMENT

This work was supported by NSFC (62073214). The authors from SJTU hereby express their appreciation.
