RuanJY / SLAMesh

ICRA 2023: A real-time LiDAR simultaneous localization and meshing method.

KITTI with Ground Truth Poses

hungdche opened this issue

Hi! Thank you for such awesome work.

I saw that you have the parameter grt_available and a GroundTruthCallback, but I didn't see any interface or mention in the README for feeding ground-truth poses into SLAMesh to test only the meshing part. I'm wondering whether I have to feed the ground-truth poses through rosbag play, or load them the way you load the lidar points for KITTI.

Any help is much appreciated. Thanks!

Hi, thank you for your interest, and sorry for the late reply.

Currently, the ground-truth messages are only used to align the first pose of the SLAM trajectory with the ground truth, so that we can visualize the drift of SLAM online. We also save the ground-truth poses for offline evaluation.

I understand your need. If you want to do that, you can try using the ground-truth messages as the odometry messages (remap the topic, set odom_available to true and grt_available to false), and then set the parameter register_times to 0 to turn off scan registration.
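For reference, here is a minimal sketch of what that could look like in the launch file. This is only an illustration under assumptions: the pkg/type/name attributes and the ground-truth topic name are placeholders, so check them against the actual SLAMesh launch files; the parameter names are the ones mentioned above.

 <node pkg="slamesh" type="slamesh_node" name="slamesh" output="screen">
     <!-- placeholder topic: feed the ground-truth poses in as the odometry input -->
     <remap from="/odom" to="/your_ground_truth_topic"/>
     <!-- use the remapped poses as an odometry prior, not as ground truth -->
     <param name="odom_available" value="true"/>
     <param name="grt_available" value="false"/>
     <!-- turn off scan registration so the fed poses are used as-is -->
     <param name="register_times" value="0"/>
 </node>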

Thanks for the reply. I will try that and get back to you with more questions if I have any.

Hi, thank you so much for providing this excellent program.
I am trying to use my own vertical radar scan data for model reconstruction, which should give a cleaner mesh, but the matching process always drifts. I tried the solution from issue #15, and then tried to correct it with the pose data obtained from the horizontal radar on the same machine.
It runs without errors but never draws the mesh. The output of the run looks like this:
[ INFO] [1709195543.923199405]: PointCloud seq: [3428]
[ INFO] [1709195543.973204881]: PointCloud seq: [3429]
[ INFO] [1709195544.024206786]: PointCloud seq: [3430]
[ INFO] [1709195544.074742817]: PointCloud seq: [3431]
Hopefully I can get this working with your help.

Sorry, my bad! It's lidar data from a backpack lidar scanner. Here is my data frame:

[screenshot of the data frame]

Sorry for the delayed response. If you see [ INFO] [1709195543.923199405]: PointCloud seq: [3428] repeatedly, SLAM has not started; that message comes from the point-cloud callback function:

ROS_INFO("PointCloud seq: [%d]", pcl_msg->header.seq);

I suspect your odometry topic is not set correctly and the algorithm is waiting for it. You can uncomment this line and check whether you receive the print from the odometry callback function:

//ROS_INFO("Odometry seq: [%d]", odom_msg->header.seq);

You should remap the odometry topic in the launch file, like this:

 <remap from="/odom" to="/your_slam_odometry" />
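If you are unsure of the exact odometry topic name, one way to check it before editing the remap is with the standard ROS command-line tools (here /your_slam_odometry is just the placeholder name from the remap above):

 rostopic list
 rostopic hz /your_slam_odometry
 rostopic echo -n 1 /your_slam_odometry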

By the way, I have made some changes to the code. Could you please test the new code again using the vertical lidar directly, and check whether it still drifts?