Depth map computation
VladimirYugay opened this issue · comments
Hey there,
For the ASE data, it is mentioned here that the depth maps store depth values along the pixel ray direction. First, I rectified the color and depth images as described here. Now, how can I compute the "normal" (z-)depth map? Is it fair to assume that I can use this formula: `z = depth_along_ray / sqrt(1 + ((u - cx)/fx)^2 + ((v - cy)/fy)^2)`?
Also, adjusting the calibration after rotating the image to the right (if I use this script) gives me an error: `AttributeError: module 'projectaria_tools.core.calibration' has no attribute 'rotate_camera_calib_cw90deg'`. Is there a function to do that in the toolbox?
In order to use `rotate_camera_calib_cw90deg`, you need a pre-release version of our PyPI archive. You can access it with `pip install projectaria-tools==1.5.1a1 --upgrade`.
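For reference, here is what the manual intrinsics update for a clockwise 90° rotation looks like under a plain pinhole model. This is a hedged sketch (the `rotate_camera_calib_cw90deg` helper also handles the full fisheye calibration, which this does not): a pixel at `(u, v)` in an `H`-row image maps to `(H - 1 - v, u)` after the rotation, so the focal lengths swap and the principal point moves accordingly.

```python
import numpy as np

def rotate_cw90(image, fx, fy, cx, cy):
    """Rotate an image 90 degrees clockwise and update pinhole intrinsics.

    A pixel at (u, v) in the original maps to (u', v') = (H - 1 - v, u)
    in the rotated image, so fx and fy swap, cx' = H - 1 - cy, cy' = cx.
    Only the linear pinhole part is handled here; use the toolbox helper
    for the full calibration.
    """
    h = image.shape[0]
    rotated = np.rot90(image, k=-1)  # k=-1 rotates clockwise
    return rotated, fy, fx, h - 1 - cy, cx
```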
Regarding the normal depth map, I will let @suvampatra add comments.
Thanks for such a quick response. Can confirm that the formula mentioned above works for getting a "real" depth.
In case somebody would need it:
```python
import numpy as np

def _calculate_real_depth(self, depth_data: np.ndarray) -> np.ndarray:
    # self.fx, self.fy, self.cx, self.cy are the pinhole intrinsics and
    # self.width, self.height the image size of the rectified camera.
    X, Y = np.meshgrid(np.arange(self.width), np.arange(self.height))
    term_x = ((X - self.cx) / self.fx) ** 2
    term_y = ((Y - self.cy) / self.fy) ** 2
    # Convert range along the pixel ray to z-depth.
    return depth_data / np.sqrt(1 + term_x + term_y)
```
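As a quick sanity check, the conversion can be verified on synthetic data: build a constant z-depth plane, compute the range along each pixel ray, and confirm that the formula recovers the original z-depth. The intrinsics below are made up for the example.

```python
import numpy as np

# Toy 2x2 pinhole camera (fx, fy, cx, cy are arbitrary example values).
fx, fy, cx, cy = 100.0, 100.0, 0.5, 0.5
width, height = 2, 2

# Synthetic scene at constant z-depth 1.0; the range along each pixel
# ray is range = z * ||(x, y, 1)||, where (x, y) = ((u - cx)/fx, (v - cy)/fy)
# are the normalized image coordinates.
u, v = np.meshgrid(np.arange(width), np.arange(height))
x = (u - cx) / fx
y = (v - cy) / fy
z = np.ones((height, width))
range_map = z * np.sqrt(1.0 + x**2 + y**2)

# Applying the range -> z-depth conversion recovers the original depth.
z_recovered = range_map / np.sqrt(1.0 + x**2 + y**2)
assert np.allclose(z_recovered, z)
```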
On another note, what is the coordinate system of the ASE data?
Sorry for the delayed response. Thanks @SeaOtocinclus for clarifying some of the doubts.
As for the range map -> depth map conversion, I see you have already figured it out; thanks for sharing it with us.
As for the coordinate systems for the data please have a look at https://facebookresearch.github.io/projectaria_tools/docs/data_formats/coordinate_convention/3d_coordinate_frame_convention. This should give you a fair understanding of the coordinate system of the data.
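For completeness, once a z-depth map is available it can be unprojected to 3D points in the camera frame with the standard pinhole model. This is a minimal sketch, not an Aria-specific API: the intrinsics are assumptions, and the axis conventions (e.g. which way Y points) follow whatever frame the calibration is expressed in, per the coordinate-convention page linked above.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a z-depth map to 3D points in the camera frame.

    Standard pinhole unprojection:
        X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
    Returns an (H, W, 3) array of points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)
```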
I think the questions have been answered, and I don't see any new ones. Please feel free to reopen the issue if you have more questions.