kazuto1011 / deeplab-pytorch

PyTorch re-implementation of DeepLab v2 on COCO-Stuff / PASCAL VOC datasets

GPU memory usage is very high

Kenneth-X opened this issue

My GPU: Tesla P100-PCIE, 16276 MiB memory

My batch size = 2 and GPU IDs = 0,1 (1 image per GPU), input size = (513, 513).
When training starts, memory usage is very high (12756 MiB / 16276 MiB) and drops to normal after a while (4689 MiB / 16276 MiB).

This forces me to use a very small input size, as I cannot use a bigger one (1000, 1000) because memory usage blows up at the start.

What makes this happen? Can it be optimized? How can I solve it?

It seems that cuDNN searches for the optimal convolution algorithm at the very first iteration (whenever the input size changes, to be precise), which temporarily needs extra workspace memory. Please try disabling the benchmark mode:

torch.backends.cudnn.benchmark = False
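A minimal sketch of the effect, not taken from this repository (the convolution layer and tensor sizes below are just illustrative), assuming the flag is set before any CUDA work happens:

```python
import torch
import torch.nn as nn

# Disabling cuDNN's benchmark mode skips the per-input-size algorithm
# search, which is what temporarily allocates the extra workspace memory
# at the first iteration (and again whenever the input size changes).
torch.backends.cudnn.benchmark = False

if torch.cuda.is_available():
    conv = nn.Conv2d(3, 64, kernel_size=3, padding=1).cuda()
    x = torch.randn(1, 3, 513, 513, device="cuda")
    y = conv(x)
    # With benchmark mode off, peak memory stays close to steady-state
    # usage instead of spiking while cuDNN tries candidate algorithms.
    print(torch.cuda.max_memory_allocated() / 2**20, "MiB peak")
```

Note that benchmark mode usually speeds up training when the input size is fixed, so disabling it trades some throughput for a lower initial memory peak.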

Yeah, it worked.
It really helped me a lot, thank you for your help.