depth from disparity
christian-rauch opened this issue
Christian Rauch commented
The depth computation for the KITTI dataset:
RAFT-3D/scripts/kitti_submission.py, lines 74 to 75 at commit 877eb80
appears to be missing the baseline (0.54 m according to http://www.cvlibs.net/datasets/kitti/setup.php).
Depth is computed from disparity as "depth = b * f / disparity". Using just "depth = f / disparity" is fine for the synthetic dataset, since the baseline in Blender is set to 1.0. But how is the KITTI baseline incorporated into the KITTI depth computation? Are the disparity images pre-scaled for a baseline of 1 metre? Or is this somehow part of DEPTH_SCALE (which is 0.1 here)? Also, why is the disparity taken from GA-Net and not from the original dataset?
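For reference, here is a minimal sketch of the stereo conversion I am describing. The function name and the example focal length are my own assumptions for illustration, not values taken from the RAFT-3D script:

```python
import numpy as np

def disparity_to_depth(disparity, fx, baseline):
    """Convert a disparity map (in pixels) to depth (same unit as baseline)
    via depth = baseline * fx / disparity."""
    depth = np.zeros_like(disparity, dtype=np.float32)
    valid = disparity > 0  # zero disparity means no match / infinite depth
    depth[valid] = baseline * fx / disparity[valid]
    return depth

# Hypothetical KITTI-like values:
fx = 721.5377        # focal length in pixels (approximate)
baseline = 0.54      # metres, per the KITTI setup page
disparity = np.array([[54.0, 27.0], [0.0, 10.8]], dtype=np.float32)
print(disparity_to_depth(disparity, fx, baseline))
# -> roughly [[7.2, 14.4], [0.0, 36.1]] metres
```

With a baseline of 1.0 (as in the Blender-rendered data) the formula collapses to "depth = f / disparity", which is why I am asking where the 0.54 m KITTI baseline enters the computation.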
Tonmoy Saikia commented
How is this factor (DEPTH_SCALE) chosen for an arbitrary RGB-D pair? Any tips?