rabbityl / lepard

[CVPR 2022, Oral] Learning Partial point cloud matching in Rigid and Deformable scenes

4DMatch data and deformation graph generation.

qinzheng93 opened this issue · comments

Hi Yang,

I noticed that the data files in 4DMatch are named "camA_xxxx_camB_xxxx.npz". However, there is only one sequence for each shape in DeformingThings4D, so I am wondering what "cam1" and "cam2" mean. My guess is that "cam1" is the original camera in DeformingThings4D and "cam2" is generated with a random rigid transformation (fixed for the same shape); is that right?

Also, I see in the supplementary material that the deformation graph is generated according to geodesic distances computed from the depth images. Where can I find the implementation details for this?

Thanks a lot.

Hi

Exactly: we sample two cameras for each animation to mimic camera movement. Given "camA_xxxx_camB_xxxx.npz", camA is the source camera and camB is the target camera; "xxxx" indicates the frame ID.
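(Not from the repo, just a minimal Python sketch of parsing that naming scheme, assuming the camera names themselves contain no underscores.)

```python
# Minimal sketch: split a 4DMatch pair filename into source/target camera
# and frame IDs. Assumes the camera names contain no underscores.
def parse_pair_name(name: str):
    stem = name[:-len(".npz")] if name.endswith(".npz") else name
    src_cam, src_frame, tgt_cam, tgt_frame = stem.split("_")
    return src_cam, int(src_frame), tgt_cam, int(tgt_frame)

print(parse_pair_name("camA_0015_camB_0022.npz"))
# -> ('camA', 15, 'camB', 22)
```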

I uploaded an N-ICP implementation here, where you can also find the deformation graph construction. It uses an Adam solver, which is different from the Lepard paper (which uses a Gauss-Newton solver). Adam is slower but should be easier to use.
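For anyone who wants the gist before opening the link, here is a rough, self-contained sketch (my own, not the linked code) of Adam-based non-rigid fitting over a deformation graph. It uses per-node translations only, Euclidean rather than geodesic node sampling, and a plain nearest-neighbor loss, so treat it purely as an illustration of the optimization pattern.

```python
# Simplified Adam-based non-rigid alignment with a deformation graph.
# NOT the repository's implementation: no node rotations, no ARAP regularizer,
# Euclidean (not geodesic) node sampling.
import torch

def build_graph(points, num_nodes=64, k=4):
    """Farthest-point-sample graph nodes and attach each point to its k nearest nodes."""
    idx = [0]
    dist = torch.full((points.shape[0],), float("inf"))
    for _ in range(num_nodes - 1):
        dist = torch.minimum(dist, (points - points[idx[-1]]).norm(dim=1))
        idx.append(int(dist.argmax()))
    nodes = points[idx]                                  # (num_nodes, 3)
    d = torch.cdist(points, nodes)                       # (N, num_nodes)
    knn_dist, knn_idx = d.topk(k, dim=1, largest=False)
    weights = torch.softmax(-knn_dist, dim=1)            # soft skinning weights
    return nodes, knn_idx, weights

def warp(points, knn_idx, weights, node_trans):
    """Warp points by blending the translations of their attached graph nodes."""
    blended = (weights.unsqueeze(-1) * node_trans[knn_idx]).sum(dim=1)
    return points + blended

def nicp_adam(src, tgt, iters=300, lr=1e-2):
    """Optimize per-node translations with Adam so the warped src matches tgt."""
    nodes, knn_idx, weights = build_graph(src)
    node_trans = torch.zeros_like(nodes, requires_grad=True)
    opt = torch.optim.Adam([node_trans], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        warped = warp(src, knn_idx, weights, node_trans)
        nn_dist = torch.cdist(warped, tgt).min(dim=1).values       # data term
        loss = nn_dist.mean() + 1e-2 * node_trans.norm(dim=1).mean()  # weak regularizer
        loss.backward()
        opt.step()
    return warp(src, knn_idx, weights, node_trans.detach())

if __name__ == "__main__":
    src = torch.rand(2000, 3)
    tgt = src + 0.05 * torch.sin(4.0 * src)   # toy smooth deformation
    aligned = nicp_adam(src, tgt)
    print("mean residual:", torch.cdist(aligned, tgt).min(dim=1).values.mean().item())
```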

Here are the raw depth images [4.5 GB Google Drive] for 4DMatch.
Given the sequence ID and the data file name, you should be able to trace back to the raw depth maps for the 4DMatch data.
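In case it helps, a tiny sketch (not the official layout) of mapping a sequence ID plus a pair filename to the two depth images; the directory template and image extension are assumptions to adjust after inspecting the archive.

```python
from pathlib import Path

# Assumed layout: <depth_root>/<sequence_id>/<camera>/<frame>.png
# The real folder structure / image format in the archive may differ.
def depth_paths(depth_root: str, sequence_id: str, pair_name: str):
    stem = pair_name[:-len(".npz")]
    src_cam, src_frame, tgt_cam, tgt_frame = stem.split("_")
    root = Path(depth_root) / sequence_id
    return root / src_cam / f"{src_frame}.png", root / tgt_cam / f"{tgt_frame}.png"

print(depth_paths("4dmatch_depth", "some_sequence", "camA_0015_camB_0022.npz"))
```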