dazinovic / neural-rgbd-surface-reconstruction

Official implementation of the CVPR 2022 Paper "Neural RGB-D Surface Reconstruction"

Home Page: https://dazinovic.github.io/neural-rgbd-surface-reconstruction/

is it necessary to load everything in memory?

rancheng opened this issue · comments

The whole dataset is loaded into memory for further computation. Is this necessary for the NeRF rendering that follows?

    # Read images and depth maps for which valid poses exist
    for i in train_frame_ids:
        if valid_poses[i]:
            img = imageio.imread(os.path.join(basedir, 'images', img_files[i]))
            depth = imageio.imread(os.path.join(basedir, 'depth_filtered', depth_files[i]))

            images.append(img)
            depth_maps.append(depth)
            poses.append(all_poses[i])
            frame_indices.append(i)

Most people don't have enough memory to load everything at once. Could you please consider switching to a streaming data loader, like PyTorch's DataLoader?
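To make the suggestion concrete, here is a minimal sketch (my own, not part of the repo) of a lazy, map-style dataset that reads one frame from disk per `__getitem__` call instead of pre-loading everything. It mirrors the variable names from the snippet above (`basedir`, `img_files`, `valid_poses`, etc.); the class name and the injected `read_fn` are assumptions for illustration. It already follows the `torch.utils.data.Dataset` protocol (`__len__`/`__getitem__`), so plugging it into a PyTorch `DataLoader` would only require subclassing `torch.utils.data.Dataset`; `torch` itself is not imported here.

```python
import os

class LazyRGBDDataset:
    """Map-style dataset that loads frames on demand (hypothetical sketch)."""

    def __init__(self, basedir, img_files, depth_files, all_poses,
                 valid_poses, frame_ids, read_fn):
        # Keep only frames with valid poses; store paths and poses (small),
        # but NOT the decoded images themselves.
        self.frame_ids = [i for i in frame_ids if valid_poses[i]]
        self.img_paths = {i: os.path.join(basedir, 'images', img_files[i])
                          for i in self.frame_ids}
        self.depth_paths = {i: os.path.join(basedir, 'depth_filtered',
                                            depth_files[i])
                            for i in self.frame_ids}
        self.poses = {i: all_poses[i] for i in self.frame_ids}
        self.read_fn = read_fn  # e.g. imageio.imread; injected for testing

    def __len__(self):
        return len(self.frame_ids)

    def __getitem__(self, idx):
        i = self.frame_ids[idx]
        img = self.read_fn(self.img_paths[i])      # decoded only now
        depth = self.read_fn(self.depth_paths[i])  # decoded only now
        return img, depth, self.poses[i], i
```

With this layout, memory holds only file paths and poses until a sample is actually requested, so peak usage no longer scales with the size of the image set.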

commented

In an early version of my method, I observed better results when optimizing with rays randomly selected over the entire corpus of input images, rather than using rays from a single image in each batch. I haven't tested whether that's still the case with the full method, but I'd assume it is. There is probably a way to implement random ray selection efficiently without pre-loading everything into memory and shuffling, but at the moment I have no plans to do that.
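One way such a scheme could look (a sketch of my own, not the author's code): treat every ray as a flat index into `num_frames * H * W`, draw batch indices uniformly, and decode frames on demand through a small LRU cache. All names and the cache size are assumptions for illustration.

```python
import random
from functools import lru_cache

def make_ray_sampler(frame_paths, H, W, load_frame, cache_frames=16):
    """Return a function that samples random rays over all frames lazily."""
    num_rays = len(frame_paths) * H * W

    @lru_cache(maxsize=cache_frames)
    def cached_load(frame_idx):
        # Decode the frame only when a sampled ray actually needs it.
        return load_frame(frame_paths[frame_idx])

    def sample_batch(batch_size, rng=random):
        ids = [rng.randrange(num_rays) for _ in range(batch_size)]
        batch = []
        for rid in ids:
            frame_idx, pixel = divmod(rid, H * W)
            row, col = divmod(pixel, W)
            img = cached_load(frame_idx)
            batch.append((frame_idx, row, col, img[row][col]))
        return batch

    return sample_batch
```

The trade-off the author hints at is visible here: truly uniform sampling touches many different frames per batch, so the cache only pays off if batches have some frame locality (e.g. sampling rays from a random subset of frames per step), which slightly deviates from fully global shuffling.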

Thanks for your reply!