zju3dv / NeuralRecon-W

Code for "Neural 3D Reconstruction in the Wild", SIGGRAPH 2022 (Conference Proceedings)


Too long training time (36 hours for a single epoch on the Phototourism dataset using a single 32 GB NVIDIA V100 GPU)

purplebutterfly79 opened this issue

When training on scenes of the Phototourism dataset (Pantheon exterior, without image downscaling) with an NVIDIA Tesla V100 GPU with 32 GB of memory, a single epoch takes me 36 hours. At the default of 20 epochs, full training would take about 30 days.
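The 30-day estimate above is just the per-epoch time scaled by the default epoch count; a quick sanity check of that arithmetic (numbers taken from this issue):

```python
# Back-of-the-envelope check of the reported training time.
hours_per_epoch = 36          # observed: one epoch on a single 32 GB V100
num_epochs = 20               # default epoch count mentioned above

total_hours = hours_per_epoch * num_epochs
total_days = total_hours / 24
print(f"{total_hours} hours = {total_days:.0f} days")  # 720 hours = 30 days
```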

Could you please provide the exact specification of the GPUs used in your experiments? The paper mentions that 8 NVIDIA A100 GPUs were used. How much memory does each GPU have? Is it 80 GB per GPU?

Hi, we used 8 NVIDIA A100 GPUs with 40 GB of memory each for our experiments. However, you do not need to run all epochs; training for the time specified in the paper is sufficient.