hku-mars / mlcc

Fast and Accurate Extrinsic Calibration for Multiple LiDARs and Cameras

Thank you very much for adding calibration for a single camera and LiDAR. I encountered the following error:

af-doom opened this issue · comments

Iteration:0 Distance:30
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!

Iteration:1 Distance:29
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!
[pcl::KdTreeFLANN::setInputCloud] Cannot create a KDTree with an empty input cloud!

My LiDAR is a Velodyne VLP-16.
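For reference, this warning simply means an empty point cloud was handed to `pcl::KdTreeFLANN::setInputCloud`, e.g. because the extracted feature cloud for that iteration contained no points. A minimal standalone sketch of the guard pattern (using plain C++ stand-ins rather than the real PCL types, and a hypothetical helper name):

```cpp
#include <cstdio>
#include <vector>

// Minimal stand-ins for pcl::PointXYZ / pcl::PointCloud, just to
// illustrate the guard pattern without requiring PCL itself.
struct PointXYZ { float x, y, z; };
using Cloud = std::vector<PointXYZ>;

// Hypothetical helper: only hand the cloud to the KD-tree when it is
// non-empty; otherwise skip the nearest-neighbor step instead of
// triggering "Cannot create a KDTree with an empty input cloud!".
bool safe_build_kdtree(const Cloud& cloud) {
    if (cloud.empty()) {
        std::fprintf(stderr, "Skipping KD-tree build: input cloud is empty\n");
        return false;  // caller should skip this iteration
    }
    // kdtree.setInputCloud(cloud_ptr);  // the real PCL call would go here
    return true;
}
```

The guard does not fix the root cause (too few points in each single VLP-16 scan), but it makes the failure mode explicit.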

Hi, have you resolved this error?

Sorry, I haven't. If you succeed, please help me.

Sorry, this error has been resolved, but the calibration still cannot be completed.

Hi @af-doom and @dingkwang, I sincerely apologize for my late reply. For your case, I encourage you to rotate the LiDAR so that it scans the entire surroundings; then you can use the accumulated point cloud (built with a LiDAR SLAM algorithm, e.g., FAST-LIO2) and the image from the first frame to calibrate.

What is the expected point density of the accumulated point cloud?

In our tests, we chose a voxel size of 0.01 m or 0.05 m to downsample the point cloud. It worked well for us.
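In PCL this downsampling is typically done with `pcl::VoxelGrid`. To make the idea concrete, here is a self-contained sketch of the underlying centroid-per-voxel logic (plain C++ without PCL; the types and helper name are illustrative, not the project's actual code):

```cpp
#include <cmath>
#include <map>
#include <tuple>
#include <vector>

struct Point { float x, y, z; };

// Downsample by averaging all points that fall into the same cubic
// voxel of side `leaf` (e.g. 0.01 m or 0.05 m, as suggested above).
std::vector<Point> voxel_downsample(const std::vector<Point>& in, float leaf) {
    // voxel index -> (running sum of coordinates, point count)
    std::map<std::tuple<int, int, int>, std::pair<Point, int>> grid;
    for (const Point& p : in) {
        auto key = std::make_tuple(static_cast<int>(std::floor(p.x / leaf)),
                                   static_cast<int>(std::floor(p.y / leaf)),
                                   static_cast<int>(std::floor(p.z / leaf)));
        auto& cell = grid[key];  // value-initialized to zeros on first use
        cell.first.x += p.x; cell.first.y += p.y; cell.first.z += p.z;
        cell.second += 1;
    }
    std::vector<Point> out;
    for (const auto& [key, cell] : grid) {
        (void)key;
        out.push_back({cell.first.x / cell.second,
                       cell.first.y / cell.second,
                       cell.first.z / cell.second});
    }
    return out;
}
```

A coarser leaf (0.05 m) keeps the optimization fast; a finer one (0.01 m) preserves more edge detail for the calibration residuals.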

@af-doom @dingkwang And if you choose FAST-LIO2 to accumulate the point cloud, please make sure you keep the sensor suite stationary for a while at the start, because FAST-LIO2 drops a few scans at the beginning. If you need my help, you can share your point cloud, image, or rosbag files here.