apple / ml-neuman

Official repository of NeuMan: Neural Human Radiance Field from a Single Video (ECCV 2022)

Where do you use mono_depth?

mks0601 opened this issue · comments

Hi, thanks for your amazing work.
I was wondering where you use the depth maps from mono_depth.
I can't find them in either of your pre-processing files: 1) https://github.com/apple/ml-neuman/blob/main/preprocess/export_alignment.py and 2) https://github.com/apple/ml-neuman/blob/main/preprocess/optimize_smpl.py.
Could you clarify where and how you use the depth maps from mono_depth?

The depth values are used to regularize the background NeRF, and you can find them inside the dataloader:

depths_list = [] # MVS/fused depth values
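
For intuition, here is a minimal sketch of what such a depth regularization term can look like; this is not the repository's implementation, and the function and tensor names below are assumptions:

import torch

def depth_regularization(weights, z_vals, target_depth, valid):
    """Sketch of a depth regularizer for the background NeRF.

    weights      -- (N_rays, N_samples) volume-rendering weights per sample
    z_vals       -- (N_rays, N_samples) sample depths along each ray
    target_depth -- (N_rays,) captured MVS/fused depth per ray
    valid        -- (N_rays,) bool mask, True where a depth target exists
    """
    # Expected ray termination depth under the current radiance field.
    rendered_depth = (weights * z_vals).sum(dim=-1)
    # Penalize deviation from the captured depth only where a target exists.
    return torch.abs(rendered_depth - target_depth)[valid].mean()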

Hi, I traced back through your code, and it seems you're not using mono_depth, only depth_maps.

  1. Read depth data from this code (https://github.com/apple/ml-neuman/blob/0149d258b2afe6ef65c91557bba9f874675871e4/train.py#L37C26-L37C26).
  2. read_data_to_ram can be found here:
    def read_data_to_ram(self, data_list=['image']):
  3. The read_data_to_ram function calls the read_image_to_ram and read_depth_to_ram functions for each cap, where cap is an instance of this class (
    class NeuManCapture(captures_module.RigRGBDPinholeCapture):
    )
  4. read_image_to_ram first reads mono_depth with this code (
    return self.captured_image.read_image_to_ram() + self.captured_mask.read_image_to_ram() + self.captured_mono_depth.read_depth_to_ram()
    )
  5. But the depth data is then overwritten by the read_depth_to_ram call from step 3 above, where read_depth_to_ram calls this function (
    def read_depth(self):
    ).
  6. As captured_depth is defined with depth_path, not mono_depth_path, the mono_depth data from step 4 is overwritten by the depth data from step 5. FYI, depth_path refers to the MVS depth data.

Could you check whether I'm right? Thanks!

Sorry for the over-complicated pipeline... It was inherited from an SfM project.
Regarding MVS depth and monocular depth: we used the monocular depth to fill the holes in the MVS depth maps (monocular depth is only defined up to an unknown scale and shift, hence the linear fit against the MVS depth). See:

@property
def fused_depth_map(self):
    # Lazily build and cache the fused depth map.
    if self._fused_depth_map is None:
        # Pixels where the MVS depth is valid and the mask condition holds.
        valid_mask = (self.depth_map > 0) & (self.mask == 0)
        # Fit a linear (scale + shift) mapping from monocular depth to MVS depth on those pixels.
        x = self.mono_depth_map[valid_mask]
        y = self.depth_map[valid_mask]
        res = scipy.stats.linregress(x, y)
        # Keep the MVS depth where it is valid; fill the remaining pixels with the aligned monocular depth.
        self._fused_depth_map = self.depth_map.copy()
        self._fused_depth_map[~valid_mask] = self.mono_depth_map[~valid_mask] * res.slope + res.intercept
    return self._fused_depth_map
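
For intuition, a standalone toy example of the same idea (not the repository's code): fit a scale and shift from monocular depth to MVS depth where both are valid, then use the fitted line to fill the MVS holes.

import numpy as np
import scipy.stats

# Toy data: a dense monocular depth map and a sparse MVS depth map (0 marks a hole).
mono_depth = np.array([[1.0, 2.0],
                       [3.0, 4.0]])
mvs_depth = np.array([[2.5, 4.5],
                      [0.0, 8.5]])

valid = mvs_depth > 0
# Linear fit: mvs ≈ slope * mono + intercept, using only pixels where MVS depth exists.
res = scipy.stats.linregress(mono_depth[valid], mvs_depth[valid])

fused = mvs_depth.copy()
fused[~valid] = mono_depth[~valid] * res.slope + res.intercept
print(fused)  # the hole at (1, 0) is filled with 3.0 * 2.0 + 0.5 = 6.5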

Then the fused depth map is used to regularize the bkg nerf:
depths = (cap.fused_depth_map[coords[:, 1], coords[:, 0]]).astype(np.float32)

Great, thanks! Now I get it. BTW, are you using other geometry data, such as densepose and keypoints, when training the NeRF (not for preprocessing)? It seems NeuManCapture loads them but doesn't use them.

We didn't use densepose/keypoints during the NeRF training stage, IIRC. You can double-check by setting them to None manually.
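
A minimal sketch of that sanity check, assuming the scene object exposes its captures as scene.captures and that the attributes are named keypoints and densepose; none of these names are verified against the code:

# Hypothetical sanity check: drop the extra geometry data before training.
# scene.captures and the attribute names below are assumptions, not verified.
for cap in scene.captures:
    cap.keypoints = None   # assumed attribute name
    cap.densepose = None   # assumed attribute name
# If NeRF training still runs end to end, these inputs are unused at train time.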

Awesome, thanks!

Hi jianwei221, two follow-up questions.

  1. Why do you use mono_depth only for the background? Why not use it for the foreground?
  2. Why do you adjust the scale and translation based only on the human area, like this (
    res = scipy.stats.linregress(x, y)
    )? Why not use the background area for the adjustment?

Thanks in advance!