cmusatyalab / mega-nerf

train time of the 25 model

jiabinliang opened this issue · comments

Hello, could you report the training time of the larger models (trained with 25 submodules with 512 channels each)?
It would be great if we could compare against your inspiring work.

From memory, training a submodule with 512 channels instead of 256 increases training time by roughly 2-4x (it should be possible to reproduce the exact slowdown factor with this repo). The overall training time is then a question of how many GPUs you have available. Using 25 submodules instead of 8 increases the total GPU-hours accordingly, but the submodules train independently and in parallel, so it really depends on whether you have 25 GPUs that would otherwise be idle or fewer (in which case your overall wall-clock time will be proportionally longer).
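As a back-of-envelope sketch of the scaling described above: since submodules train independently, wall-clock time is the per-submodule time multiplied by the number of sequential "rounds" your GPU count forces. The per-submodule hours and the 3x slowdown midpoint below are hypothetical placeholders, not measured numbers from this repo.

```python
import math

def estimate_wall_clock_hours(num_submodules: int,
                              hours_per_submodule: float,
                              num_gpus: int) -> float:
    """Estimate wall-clock training time when submodules train independently.

    Each GPU handles ceil(num_submodules / num_gpus) submodules in series,
    so wall-clock time is that round count times the per-submodule time.
    """
    rounds = math.ceil(num_submodules / num_gpus)
    return rounds * hours_per_submodule

# Hypothetical example: assume a 256-channel submodule takes 8 GPU-hours,
# and 512 channels is ~3x slower (midpoint of the 2-4x range above).
hours_512 = 3 * 8  # 24 hours per submodule -- an assumption, not a benchmark

print(estimate_wall_clock_hours(25, hours_512, num_gpus=25))  # one round
print(estimate_wall_clock_hours(25, hours_512, num_gpus=8))   # four rounds
```

With 25 otherwise-idle GPUs the 25 submodules finish in a single round; with 8 GPUs the same job takes four rounds, so wall-clock time grows by that factor even though total GPU-hours are unchanged.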