Yukariin / SAN_pytorch

Second-order Attention Network for Single Image Super-resolution (CVPR-2019)

CUDA out of memory

CR7forMadrid opened this issue

Why am I getting CUDA out of memory? Tried to allocate 8.38 GiB (GPU 0; 10.92 GiB total capacity; 8.69 GiB already allocated; 1.22 GiB free; 33.00 MiB cached).
I set batch_size=1, but it still runs out of CUDA memory. What is the reason?

What are you trying to do exactly?
Your 11 GB GPU should be fine for inference (using test.py), even with batch_size=16.
As for training, I used batch_size=16 with a patch size of 48x48 on a T4.
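
If it runs out of memory at inference even with batch_size=1, the usual culprits are running the forward pass with autograd enabled or pushing the full-resolution image through the network in one pass. Below is a minimal sketch of tile-based (chopped) inference under `torch.no_grad()`; the model construction is assumed and should be replaced with the repo's own loading code from test.py.

```python
# Minimal sketch: memory-bounded SR inference, assuming a PyTorch model
# that maps a 1x3xHxW tensor to 1x3x(s*H)x(s*W). Model loading is
# repo-specific (see test.py) and not shown here.

import torch


@torch.no_grad()  # disable autograd bookkeeping; a common cause of inference OOM
def tiled_sr(model, lr, scale=4, tile=96, overlap=8):
    """Super-resolve `lr` (1x3xHxW, already on the model's device) tile by tile.

    Each forward pass only sees a crop of roughly (tile + 2*overlap)^2 pixels,
    so peak memory stays about constant regardless of the input resolution.
    """
    _, c, h, w = lr.shape
    out = lr.new_zeros(1, c, h * scale, w * scale)

    for top in range(0, h, tile):
        for left in range(0, w, tile):
            # crop with a margin so tile borders get enough context
            t0, l0 = max(top - overlap, 0), max(left - overlap, 0)
            t1, l1 = min(top + tile + overlap, h), min(left + tile + overlap, w)
            sr_patch = model(lr[:, :, t0:t1, l0:l1])

            # paste back only the central (non-margin) region of the patch
            y0, x0 = (top - t0) * scale, (left - l0) * scale
            y1 = y0 + min(tile, h - top) * scale
            x1 = x0 + min(tile, w - left) * scale
            out[:, :,
                top * scale:top * scale + (y1 - y0),
                left * scale:left * scale + (x1 - x0)] = sr_patch[:, :, y0:y1, x0:x1]
    return out


# Usage sketch (model construction is assumed, not the repo's actual API):
# model = build_san_model(args).cuda().eval()
# sr = tiled_sr(model, lr_tensor.cuda(), scale=4, tile=96)
```

Lowering `tile` bounds the per-pass footprint if memory still grows too quickly with crop size on your GPU.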