fatPeter / mini-splatting

Is there an issue with the depth computation in the render_depthCUDA function?

lingxin12 opened this issue · comments

[Screenshot of the depth computation in the CUDA kernel, around line 605]

The computation already uses the normalized direction vector, so why is the result still divided by the direction's length (line 605 in the screenshot)?

PS: although depth is not actually used anywhere anyway.

Hello, thank you for your interest in this work.

  1. ray_direction is defined as the ray bundle in camera space whose length along the z axis is 1. In 3D vision, depth is usually defined as the distance from the camera center to the object along the z axis.
  2. normalized_ray_direction is defined as the normalized direction of the ray bundle, i.e., each ray has length 1. We can use normalized_ray_direction directly to obtain the reconstructed point cloud:
    point_rec = ray_origin + (-b/2/a) * normalized_ray_direction
    For depth, however, -b/2/a is only a pseudo depth that requires further processing (see the sketch after this list).
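
A minimal numpy sketch of the point reconstruction above; the numeric values and variable names beyond those in the formula are hypothetical and this is not the repository's CUDA code:

    import numpy as np

    # Hypothetical per-pixel ray; in the kernel these come from the ray setup.
    ray_origin = np.array([0.0, 0.0, 0.0])        # camera center
    ray_direction = np.array([0.3, -0.2, 1.0])    # length along the z axis is 1
    normalized_ray_direction = ray_direction / np.linalg.norm(ray_direction)

    t = 2.5  # stand-in for -b / (2 * a): distance travelled along the unit-length ray

    # The reconstructed point only needs the unit-length direction.
    point_rec = ray_origin + t * normalized_ray_direction

    # t itself is a Euclidean distance along the ray, not the z-depth of point_rec.
    print(point_rec, point_rec[2])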

Since some technical terms are involved, an English reply follows as well:
Thank you for your interest in our work.

  1. ray_direction denotes the directions of the generated ray bundle; in camera space, each direction's length along the z axis is set to 1. In 3D vision, we define depth as the distance from the camera center to the object along the z (depth) axis.
  2. normalized_ray_direction denotes the directions of the normalized ray bundle, where each ray has length 1. It is therefore valid to obtain the reconstructed points as:
    point_rec = ray_origin + (-b/2/a) * normalized_ray_direction
    For depth, however, -b/2/a only represents a pseudo depth that still carries a normalization factor (see the sketch below).
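
To make the role of the division concrete, here is a minimal numpy sketch (hypothetical values, not the CUDA kernel itself) of converting that pseudo depth to the usual z-depth by dividing by the length of ray_direction:

    import numpy as np

    ray_direction = np.array([0.3, -0.2, 1.0])    # ray bundle direction, z component is 1
    length = np.linalg.norm(ray_direction)
    normalized_ray_direction = ray_direction / length

    pseudo_depth = 2.5                            # stand-in for -b / (2 * a)

    # pseudo_depth is measured along the unit-length ray, whose z component is 1 / length,
    # so dividing by the length of ray_direction recovers the z-depth.
    z_depth = pseudo_depth / length

    # Check: the z coordinate of the reconstructed point (camera at the origin) agrees.
    point_rec = pseudo_depth * normalized_ray_direction
    assert np.isclose(point_rec[2], z_depth)

If this is the intended conversion, the division by the direction's length in line 605 would be exactly the normalization factor mentioned above.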

By the way, I also note that some NeRF implementations use the normalized ray bundle to obtain a pseudo depth. This is acceptable only because they use the depth purely for visualization.

@fatPeter Thank you!