princeton-vl / RAFT-3D

depth from disparity

christian-rauch opened this issue · comments

The depth computation for the KITTI dataset:

depth1 = DEPTH_SCALE * (intrinsics[0,0] / disp1)
depth2 = DEPTH_SCALE * (intrinsics[0,0] / disp2)

appears to be missing the baseline (0.54 m according to http://www.cvlibs.net/datasets/kitti/setup.php).

Depth is computed from disparity via "depth = b * f / disparity". Using just "depth = f / disparity" is fine for the synthetic dataset, since the baseline in Blender is set to 1.0. But how is the KITTI baseline incorporated into the KITTI depth computation? Are the disparity images prescaled to a baseline of 1 metre? Or is the baseline somehow folded into DEPTH_SCALE (which is 0.1 here)? Also, why is the disparity taken from GA-Net rather than from the original dataset?
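For reference, this is the standard stereo relation I would expect, sketched below with the KITTI baseline of 0.54 m. The focal length value and the handling of invalid pixels are illustrative assumptions, not taken from the RAFT-3D code; whether the GA-Net disparities are prescaled is exactly the open question.

```python
import numpy as np

# Standard stereo geometry: depth = f * B / disparity,
# where f is the focal length in pixels and B the baseline in metres.
KITTI_BASELINE = 0.54  # metres, per the KITTI setup page

def depth_from_disparity(disp, fx, baseline=KITTI_BASELINE):
    """Convert a disparity map (pixels) to metric depth (metres)."""
    disp = np.asarray(disp, dtype=np.float64)
    depth = np.full_like(disp, np.inf)   # mark invalid pixels as infinitely far
    valid = disp > 0                     # zero/negative disparity is invalid
    depth[valid] = fx * baseline / disp[valid]
    return depth

# Illustrative numbers: fx = 721.5 px (a typical KITTI intrinsic),
# disparity 38.961 px -> depth = 721.5 * 0.54 / 38.961 = 10.0 m.
```

With B = 1.0 (the Blender case) this reduces to f / disparity, which is why the baseline term only becomes visible once real KITTI geometry is involved.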

How should this factor be chosen for an arbitrary RGB-D pair? Any tips?