dazinovic / neural-rgbd-surface-reconstruction

Official implementation of the CVPR 2022 Paper "Neural RGB-D Surface Reconstruction"

Home Page: https://dazinovic.github.io/neural-rgbd-surface-reconstruction/

Memory needed to train a scene

Leviosaaaa opened this issue · comments

Thank you for sharing this cool project!

I am trying to train a model on the BlendSwap dataset as you did. The GPU I am using has about 10 GB of memory, but training was killed while loading the data. I tried decreasing N_rand and chunk, but they did not seem to affect how much GPU memory is requested for training. So, my questions are:

  1. How much memory is needed for training BlendSwap scenes?
  2. Which config parameters should I adjust if I want to decrease the GPU memory usage? Or is it currently impossible to decrease GPU memory usage, since the training code always loads all the RGB-D data before starting training?

Thanks a lot!

commented

If the script crashes during dataloading, then you probably have insufficient RAM. The code loads all data into memory first, which can require ~50GB of RAM for some scenes. One solution is to rewrite the dataloader so that the data is loaded dynamically during optimization.
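To illustrate the suggested rewrite, here is a minimal sketch of loading frames on demand instead of all at once. This is my own illustration, not the repo's code: the file format and helper names are placeholders (dummy pickled frames stand in for real RGB-D images), but the pattern is the same — a generator keeps peak memory at roughly one frame regardless of scene size.

```python
import os
import pickle
import tempfile

def save_fake_frames(directory, n_frames):
    """Write dummy RGB-D frames to disk (stand-ins for real .png/.exr files)."""
    paths = []
    for i in range(n_frames):
        path = os.path.join(directory, f"frame_{i:04d}.pkl")
        with open(path, "wb") as f:
            # a "frame" here is just a (rgb, depth) placeholder pair
            pickle.dump(([i, i, i], [float(i)]), f)
        paths.append(path)
    return paths

def frame_stream(paths):
    """Generator: load each frame from disk only when the optimizer asks for it,
    so the whole dataset never has to fit in RAM at once."""
    for path in paths:
        with open(path, "rb") as f:
            yield pickle.load(f)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        paths = save_fake_frames(d, 3)
        for rgb, depth in frame_stream(paths):
            print(rgb, depth)
```

The trade-off is extra disk I/O per iteration, which is usually hidden by prefetching frames in a background thread while the GPU works on the current batch.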

Thanks a lot for your help! I checked with watch free -h while training, and indeed the machine ran out of RAM during dataloading. I will probably have to either use a machine with more RAM or rewrite the dataloader using TensorFlow's data-loading functions, which should handle this automatically.
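For reference, TensorFlow's tf.data API supports this via Dataset.from_generator(...).batch(...).prefetch(...). A dependency-free sketch of the underlying batching-from-a-generator idea (the function name and the integer "frames" here are my own placeholders):

```python
from itertools import islice

def batched(frames, batch_size):
    """Group a lazy frame stream into fixed-size batches, similar in spirit
    to tf.data.Dataset.from_generator(...).batch(batch_size): frames are
    pulled from the generator only as each batch is assembled."""
    it = iter(frames)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:  # generator exhausted
            return
        yield batch

# Usage: stream 10 fake "frames" in batches of 4
sizes = [len(b) for b in batched(range(10), 4)]
print(sizes)  # [4, 4, 2]
```

With tf.data, adding .prefetch(tf.data.AUTOTUNE) would additionally overlap disk reads with GPU compute, so the dynamic loading costs little wall-clock time.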