RuntimeError: CUDA out of memory during running of val.py
MFarooqAit opened this issue
Dear author,
Thank you so much for sharing a very useful code.
I encountered the following error while running your val.py on your testing dataset with the provided pretrained model:
RuntimeError: CUDA out of memory. Tried to allocate 64.00 GiB (GPU 0; 10.92 GiB total capacity; 240.92 MiB already allocated; 10.11 GiB free; 43.08 MiB cached)
My system has PyTorch 1.0 with CUDA 10.0.
I looked for solutions. The usual suggestions are to reduce the batch size, reduce the input image size, or use the latest stable version of PyTorch. The batch size for testing is already 1, I reduced the input images to as small as 64x64, and I also installed a newer PyTorch in an Anaconda environment with "conda install pytorch torchvision cudatoolkit=10.0 -c pytorch", but I am still unable to fix this problem (a quick version/memory check is sketched below this post).
How can I fix this issue?
Kindly help. Thank you!
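For anyone hitting the same thing: the sketch below is one generic way to confirm which PyTorch/CUDA build is actually active in the conda environment and how much GPU memory the process is holding. It is not part of this repository's code, and it assumes a single GPU at index 0.

```python
import torch

# Which build is actually active in this environment?
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0))

# Memory currently held by tensors vs. the peak so far, in MiB (GPU index 0 assumed).
print("allocated (MiB):", torch.cuda.memory_allocated(0) / 2**20)
print("peak allocated (MiB):", torch.cuda.max_memory_allocated(0) / 2**20)
```

If the reported versions do not match the freshly installed ones, the old environment is probably still the one being used.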
Same issue here.
Dear Boris,
Resize your dataset to 256x256.
Thanks for your attention. Crop the images to 256 × 256 first.
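For reference, a minimal sketch of that preprocessing step, assuming the test images are plain PNG files in a flat folder; the SRC_DIR / DST_DIR paths are placeholders to adapt to your own dataset layout, and this is not part of the repository's code:

```python
from pathlib import Path
from PIL import Image

SRC_DIR = Path("./test_images")       # hypothetical input folder
DST_DIR = Path("./test_images_256")   # hypothetical output folder
DST_DIR.mkdir(parents=True, exist_ok=True)

CROP = 256  # target size suggested above

for img_path in sorted(SRC_DIR.glob("*.png")):
    img = Image.open(img_path)
    w, h = img.size
    # Center-crop to CROP x CROP; assumes the originals are at least CROP px per side.
    left, top = (w - CROP) // 2, (h - CROP) // 2
    img.crop((left, top, left + CROP, top + CROP)).save(DST_DIR / img_path.name)
```

After cropping, point the data path used by val.py at the cropped copies.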
Close for now.
RuntimeError: CUDA out of memory. Tried to allocate 2.00 GiB (GPU 0; 10.00 GiB total capacity; 997.78 MiB already allocated; 6.93 GiB free; 12.22 MiB cached)
Hello, I ran into the same problem. Do you have a solution? I ran:
python ./train.py --save_epoch_freq 1 --angle 15 --dataroot ./LEVIR-CD/train --val_dataroot ./LEVIR-CD/val --name LEVIR-CDFA0 --lr 0.001 --model CDFA --SA_mode BAM --batch_size 8 --load_size 256 --crop_size 256 --preprocess rotate_and_crop
Cropping still did not help.