run_downstream_custom_multiple_fold.py CUDA out of memory
zxpan opened this issue
zxpan@anoki.tv commented
Got the following when running run_downstream_custom_multiple_fold.py
RuntimeError: CUDA out of memory. Tried to allocate 730.00 MiB (GPU 0; 23.70 GiB total capacity; 21.65 GiB already allocated; 426.81 MiB free; 21.81 GiB reserved in total by PyTorch)
I have NVIDIA GeForce RTX 3090 with 24GB.
Any insights on how to work around it?
Juyeon Kim commented
Me too... I think we have to use multi-GPU.
liuhaozhe6788 commented
@zxpan You can reduce the batch size from 64 to 32.
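A minimal sketch of that workaround: halve the per-step batch size and accumulate gradients over two micro-batches, so the effective batch size stays at 64 while peak GPU memory drops. The model, dataset, and parameter names below are hypothetical stand-ins, not the actual ones from run_downstream_custom_multiple_fold.py.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical tiny model and random data for illustration only.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
data = TensorDataset(torch.randn(128, 16), torch.randint(0, 2, (128,)))

# Batch size reduced from 64 to 32; accumulate over 2 steps so the
# effective batch size is still 64.
loader = DataLoader(data, batch_size=32)
accum_steps = 2

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    # Scale the loss so accumulated gradients match a single 64-sample step.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

Where the script exposes the batch size as a config value or CLI flag, changing that alone (without accumulation) is the simplest fix, at the cost of a smaller effective batch.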