PRBonn / LiDAR-MOS

(LMNet) Moving Object Segmentation in 3D LiDAR Data: A Learning-based Approach Exploiting Sequential Data (RAL/IROS 2021)


About FlowNet3D in the paper

chenxingxin-star opened this issue · comments

Hi, thanks for your great work!
I have a question about FlowNet3D as used in your paper. FlowNet3D estimates the motion of a 3D point cloud, which is caused by both object motion and **sensor motion**, so how can you identify dynamic objects with a simple threshold? As you describe in the paper: "We set a threshold on the estimated translation of each point to decide the label for each point, i.e., points with translations larger than the threshold are labeled as moving".
In my opinion, a large scene-flow translation doesn't necessarily mean a dynamic object; it may just mean the LiDAR sensor itself is moving. Do you first transform the two frames into the same coordinate system, and then estimate the scene flow?

Thanks for your interest and question.

Yes, the correct way is to first estimate the ego-motion of the car using odometry/SLAM and then threshold the absolute translation after compensating for the ego-motion, as described in the related scene-flow papers. Since we focus more on LiDAR-MOS and have limited space in the paper, we could not give more details there.
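To make the ego-motion compensation step concrete, here is a minimal NumPy sketch. It assumes the scene flow (e.g. from FlowNet3D) maps each point from the sensor frame at t0 to its corresponding position in the sensor frame at t1, and that sensor-to-world poses from odometry/SLAM are available as 4x4 matrices. The function name, the `threshold` value, and the exact frame conventions are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def label_moving(points_t0, flow, pose_t0, pose_t1, threshold=0.2):
    """Label points whose ego-motion-compensated translation exceeds `threshold` (meters).

    points_t0 : (N, 3) point cloud in the sensor frame at time t0.
    flow      : (N, 3) estimated per-point scene flow, so that points_t0 + flow
                lies in the sensor frame at time t1 (assumed convention).
    pose_t0, pose_t1 : (4, 4) sensor-to-world poses from odometry/SLAM.
    Returns a boolean mask: True = moving.
    """
    def to_world(pts, pose):
        # Homogeneous transform of an (N, 3) cloud into the world frame.
        hom = np.hstack([pts, np.ones((pts.shape[0], 1))])
        return (pose @ hom.T).T[:, :3]

    start = to_world(points_t0, pose_t0)           # world position at t0
    end = to_world(points_t0 + flow, pose_t1)      # world position the flow implies at t1
    # After expressing both endpoints in the world frame, the sensor's own motion
    # cancels out; the residual is the point's actual motion in the world.
    residual = np.linalg.norm(end - start, axis=1)
    return residual > threshold
```

With this formulation, a static point observed from a moving sensor has a large raw flow vector but a near-zero residual after compensation, so only genuinely dynamic points exceed the threshold.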