alievk / npbg

Neural Point-Based Graphics



GPU required for training?

flaime-ai opened this issue · comments

I am trying to train a new scene and am running into memory issues during training.

I have tried a single Titan RTX (24 GB) card and a multi-GPU setup (4 × Tesla T4, 16 GB each).
With both I receive a CUDA out-of-memory error on the first epoch of training.

Which GPU setups did you use for training? I would assume mine should be large enough to train the model.

We used a GeForce GTX 1080 Ti with less than 12 GB of GPU memory, which should be more than enough for training in most cases. I'm not sure where your issue comes from; you could try decreasing the batch size if it's more than 1 in your experiment (the --batch_size parameter of train.py). Does it also happen when running the example from the README?
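The batch-size suggestion might look like the following; the config path is a placeholder for whatever config the scene uses, and only the --batch_size flag itself is confirmed above:

```shell
# Retry training with the smallest batch size; replace the config path
# with your own scene config (placeholder, not from the thread).
python train.py --config configs/my_scene.yaml --batch_size 1
```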

Thanks. Okay, there must be something else going on.
I can run both examples with no problems.
I'll try decreasing the batch size.
It may also be related to the point cloud generated from Agisoft; I didn't optimise that.
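An unoptimised photogrammetry point cloud can be very dense, and every point adds to GPU memory. One option, not from the thread, is to randomly subsample the cloud before training; a minimal NumPy sketch (function name and counts are illustrative):

```python
import numpy as np

def subsample_points(points, max_points, seed=0):
    """Randomly keep at most max_points rows of an (N, 3) point array."""
    if len(points) <= max_points:
        return points
    rng = np.random.default_rng(seed)
    # Sample without replacement so no point is duplicated.
    idx = rng.choice(len(points), size=max_points, replace=False)
    return points[idx]

# Toy example: shrink a 100k-point cloud to 10k points.
pts = np.random.rand(100_000, 3)
small = subsample_points(pts, 10_000)
print(small.shape)  # (10000, 3)
```

The same idea applies to any attribute arrays (colors, normals) by indexing them with the same `idx`.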