GPU Requirements for Training
jeremy-swack opened this issue · comments
I am attempting to train BA-TFD+ on a custom dataset, but I am running into memory issues. I am using a VM with 2 M60 GPUs with 8 GB of memory each, but I am not able to run the training module without crashing, even with the batch size set to 1. Is there a recommended amount of GPU memory for training the model?
Hi, as mentioned in the paper, we used an A100 80GB to train BA-TFD+, which requires much more memory than BA-TFD, which I trained on an RTX 3090 24GB.
I think the minimum requirement is 48GB of VRAM.