HKUST-Aerial-Robotics / DenseSurfelMapping

This is the open-source version of the ICRA 2019 submission "Real-time Scalable Dense Surfel Mapping".

Get a globally consistent point cloud

zhaozhongch opened this issue · comments

Thanks for your work!
I tried to use the VINS-supported branch, but I ran into some problems.
The following picture just shows what my dataset looks like:
[image: 2989 023949]
There is a table with some stuff on it; the camera moves around the table and keeps filming it. I publish the raw point cloud to rviz on the topic surfel_fusion/raw_pointcloud, as shown in your code.
[gif: gif_raw]
You can see that the desk is moving, but it should not move: the camera is moving, while the scene should stand still.
If I don't publish the raw point cloud and instead publish the point cloud (pointcloud_noceil), I end up with a point cloud in rviz that looks like the following:
[gif: gif_postprocess]
Sorry, maybe you cannot see it very clearly, but if you focus on the moving part of the point cloud you can still see a round table that is moving.
What I expect is that the table remains still and the scene extends as the camera moves. Instead, the table seems to move along with the camera.

The rosbag's topics are:

topics:    /depth         1169 msgs    : sensor_msgs/Image
           /groundtruth   3003 msgs    : geometry_msgs/PoseStamped
           /image0        1169 msgs    : sensor_msgs/Image
           /imu0          9069 msgs    : sensor_msgs/Imu

The depth is strictly aligned with image0.
I modified your surfel_fusion launch file as follows (camera intrinsics settings omitted):

...
    <remap from="~image" to="/image0" />
    <remap from="~depth" to="/depth" />
   <remap from="~loop_path" to="/vins_estimator/path" />
   <remap from="~extrinsic_pose" to="/vins_estimator/extrinsic" />

I play the rosbag; VINS-Mono subscribes to image0 and imu0 to estimate the camera's motion, and your package subscribes to image0 and depth directly from the rosbag, plus /vins_estimator/path and /vins_estimator/extrinsic generated by VINS. Correct me if my operation is wrong.
VINS itself gives me pretty good pose estimates; I have compared them with the ground truth.

Hi,
surfel_fusion/raw_pointcloud is the transformed point cloud and should be attached to the world frame. In your case, there must be some issue in the system.

In my demos, we use VINS-Fusion instead of VINS-Mono. I am not sure that the path in VINS-Mono is the same as that in VINS-Fusion. It should be the IMU pose in the world frame.

I cannot debug the system with a few gifs. The pose of the point cloud is determined by both /vins_estimator/extrinsic and /vins_estimator/path. If you can guarantee that the path is correct, I recommend checking the extrinsic estimation to see if it drifts or is wrong (although it should be estimated well when the path is good).
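To make that dependency concrete, here is a minimal numpy sketch of the transform chain as I understand it (variable and function names are illustrative, not the actual surfel_fusion code): a camera-frame point reaches the world frame through both transforms, so an error in either one moves the whole cloud.

    import numpy as np

    def to_homogeneous(R, t):
        # Build a 4x4 transform from a 3x3 rotation matrix and a translation.
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def camera_point_to_world(p_cam, T_world_imu, T_imu_cam):
        # T_world_imu: IMU pose in the world frame (from /vins_estimator/path)
        # T_imu_cam:   camera pose in the IMU frame (from /vins_estimator/extrinsic)
        p = np.append(p_cam, 1.0)  # homogeneous coordinates
        return (T_world_imu @ T_imu_cam @ p)[:3]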

I checked my code: publish_raw_pointcloud(depth, image, fuse_pose) does not guarantee that the messages are well synchronized in your system. I think this is the first thing you should check.
You can show me the log of surfel fusion from your terminal.
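For reference, one way to check the synchronization offline is to compare the header stamps of the two image topics directly in the bag (a sketch using the rosbag Python API; 'my_dataset.bag' is a placeholder):

    import rosbag

    # Collect the header stamps of /image0 and /depth and flag pairs that
    # are more than 1 ms apart.
    stamps = {'/image0': [], '/depth': []}
    with rosbag.Bag('my_dataset.bag') as bag:
        for topic, msg, _ in bag.read_messages(topics=list(stamps)):
            stamps[topic].append(msg.header.stamp.to_sec())

    for t_img, t_depth in zip(stamps['/image0'], stamps['/depth']):
        offset = abs(t_img - t_depth)
        if offset > 1e-3:
            print('image %.6f vs depth %.6f: off by %.6f s' % (t_img, t_depth, offset))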

Thanks for your reply.
In fact I am using VINS-Fusion too... I took two screenshots of the log.
I took the following one when the dataset starts:
[image: log1]
After a while I took the other one:
[image: log2]
I personally think it should be the coordinate system's problem. I publish the path (generated by surfel fusion) and the point cloud (not the raw point cloud) together to rviz and get the following screenshots:
[image: path1]
[image: path2]
[image: path3]
You can clearly see that when the path is a half-circle the point cloud is a half-circle too, and when the path finishes a loop the point cloud also finishes a loop. It really feels like the point cloud just shows "how the camera moves" instead of what the scene looks like.
I double-checked the /vins_estimator/path topic I publish to your system: it is in the visualization.cpp file of VINS-Fusion, and it publishes estimator.Ps and estimator.Rs, i.e. the position (Ps) and rotation (Rs) of the sensor, with the frame name "world", so I can confirm it is the IMU pose in the world frame.
I also tried publishing /loop_fusion/pose_graph_path (from the VINS loop_fusion package, pose_graph.cpp) in place of /vins_estimator/path. The result (the path generated by surfel fusion and the point cloud) is similar. My dataset is not very large, so it doesn't make a big difference whether I use VINS's loop closure or not.
Em... So in short, I think it should be the coordinate system's problem, but the coordinates I publish to your system are correct. Really confusing.
Thanks again for taking time reading this!

Well...
I can't figure out any other reasons, since the messages are synchronized successfully.
Let's make sure that:
1. You said the VINS pose is correct, right?
2. Is driftfree_loop_path from SurfelMapping correct, and is it consistent with that of VINS-Fusion? (driftfree_loop_path publishes the camera pose in the world frame; a sketch for comparing the two paths follows this list.)
3. Have you checked the intrinsic parameters in your launch file?
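A quick way to compare the two paths is to print the total trajectory length of each one as it arrives (a hedged sketch; the driftfree_loop_path topic name and its nav_msgs/Path type are my assumptions):

    import rospy
    from nav_msgs.msg import Path

    def path_length(path):
        # Total length of a nav_msgs/Path: a simple proxy for the trajectory scale.
        pts = [p.pose.position for p in path.poses]
        return sum(((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5
                   for a, b in zip(pts, pts[1:]))

    def report(msg, topic):
        rospy.loginfo('%s: %.2f m', topic, path_length(msg))

    rospy.init_node('path_scale_check')
    for topic in ['/vins_estimator/path', '/surfel_fusion/driftfree_loop_path']:
        rospy.Subscriber(topic, Path, report, callback_args=topic)
    rospy.spin()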

Hi KX,
You are right. The drift-free loop path's scale is different from the VINS path's. That reminded me that it should be the scaling factor's problem: your depth scaling factor is 0.001, while my dataset's is 0.0002. I corrected that and got the result I want.
[image: correct_result]
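For anyone who hits the same symptom: the scale is the meters-per-unit factor applied to the raw 16-bit depth values, so the wrong factor stretches the whole scene relative to the metric VINS trajectory. A tiny sketch of the arithmetic (the parameter itself lives in the surfel_fusion launch file):

    import numpy as np

    raw_depth = np.array([5000, 10000], dtype=np.uint16)  # example encoded values

    print(raw_depth * 0.001)   # [ 5. 10.] -- 5x too deep with the default scale
    print(raw_depth * 0.0002)  # [1. 2.]   -- correct: 5000 units == 1 m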

@zhaozhongch Excuse me, what camera did you use to record the rosbag?

@yueshukun I didn't use a camera to record the bag. The dataset is from https://www.eth3d.net/slam_datasets
It should be table_3 or table_4.
I wrote a script to convert the original dataset to a rosbag.
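In case it is useful, a rough sketch of such a conversion (assuming the TUM-style rgb.txt/depth.txt index files that the ETH3D SLAM datasets ship with; paths and encodings are placeholders, and the IMU/ground-truth topics are omitted for brevity):

    import cv2
    import rosbag
    import rospy
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def read_index(path):
        # Parse a TUM-style index file into (timestamp, filename) pairs.
        pairs = []
        with open(path) as f:
            for line in f:
                if line.startswith('#') or not line.strip():
                    continue
                ts, fname = line.split()
                pairs.append((float(ts), fname))
        return pairs

    with rosbag.Bag('table_3.bag', 'w') as bag:
        for ts, fname in read_index('table_3/rgb.txt'):
            img = cv2.imread('table_3/' + fname, cv2.IMREAD_GRAYSCALE)
            msg = bridge.cv2_to_imgmsg(img, encoding='mono8')
            msg.header.stamp = rospy.Time.from_sec(ts)
            bag.write('/image0', msg, msg.header.stamp)
        for ts, fname in read_index('table_3/depth.txt'):
            depth = cv2.imread('table_3/' + fname, cv2.IMREAD_UNCHANGED)  # 16-bit
            msg = bridge.cv2_to_imgmsg(depth, encoding='mono16')
            msg.header.stamp = rospy.Time.from_sec(ts)
            bag.write('/depth', msg, msg.header.stamp)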

@zhaozhongch I got it. Thanks.