GPU memory
acsignal opened this issue · comments
I am using the Python 3.6 (branch 1.0) version of your code with CUDA 10, and everything seems to work up until step 3 (training). I am using a GTX 2080 Ti with an 11 GB frame buffer, so I was under the impression it would have enough memory to train the model, but I get this error:
RuntimeError: CUDA out of memory. Tried to allocate 392.00 MiB (GPU 0; 11.00 GiB total capacity; 7.75 GiB already allocated; 36.24 MiB free; 661.04 MiB cached)
Is there any way for me to clear the cached memory to make way for the allocation, or is that a bad idea?
Also, if it's the case that I simply don't have enough memory to run the training, when will the lightweight (mono) model be released for Python 3.6?
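For what it's worth, the figures in that message can be decoded with some quick arithmetic. This is a rough sketch assuming the error comes from PyTorch's caching allocator (which that message format suggests); the exact accounting is internal to the allocator:

```python
# Decode the numbers reported in the OOM message above (all in MiB).
# Assumption: this is PyTorch's caching-allocator error format.
GIB = 1024  # MiB per GiB

total = 11.00 * GIB    # "11.00 GiB total capacity"
allocated = 7.75 * GIB # "7.75 GiB already allocated" (live tensors)
cached = 661.04        # "661.04 MiB cached" (freed blocks kept for reuse)
free = 36.24           # "36.24 MiB free" (what cudaMalloc can still return)
request = 392.00       # "Tried to allocate 392.00 MiB"

# Memory not accounted for by PyTorch: CUDA context, display driver,
# and any other processes sharing the GPU.
unaccounted = total - allocated - cached - free
print(f"unaccounted: {unaccounted:.2f} MiB")  # ~2630 MiB of overhead

# The request fails because it exceeds free memory, and the cache
# evidently held no single contiguous block large enough to satisfy it.
print(f"request {request} MiB vs free {free} MiB")
```

So even though free + cached is nominally larger than the 392 MiB request, fragmentation of the cached blocks can still cause the failure, which is why reducing the batch size (shrinking the per-step allocations) usually helps more than manually emptying the cache.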
Hi, I use the same version and hardware and got the same issue. How did you fix it in the end?