CUDA out of memory
xuanzhezhao opened this issue · comments
Hi, thanks for your great work! When I try to run the train/test demo, my GPU runs out of memory
(on a Tesla V100, 16GB). How should I change the hyperparameters in the config file to reduce GPU memory consumption while still getting good performance?
Thanks a lot.
Also, I can run the test demo successfully, but the training demo still fails with this CUDA error.
Update: it works now after I changed 'imgs_per_gpu' and 'workers_per_gpu' to 1, but I would still greatly appreciate any other suggestions for this issue. Thanks!
To reduce memory, you can either change the encoder part of the backbone to EfficientNet-b0, or reduce the random crop resolution, or do both.
For Cityscapes, training usually requires about 12GB per image.
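The suggestions above can be sketched as a config fragment. This is a hypothetical, mmsegmentation-style illustration: the exact key names (`imgs_per_gpu`, `workers_per_gpu`, `crop_size`, the backbone dict) and the example crop values are assumptions, so adapt them to the keys actually present in this repo's config file.

```python
# Hypothetical config sketch of the two memory-reduction strategies.
# Key names and values are illustrative, not this repo's actual config.

# 1) Lower per-GPU batch size and dataloader workers
#    (the change that resolved the OOM for the reporter).
data = dict(
    imgs_per_gpu=1,     # fewer images per batch -> less activation memory
    workers_per_gpu=1,  # fewer dataloader workers -> less host-side pressure
)

# 2) Use a smaller encoder and/or a smaller random crop.
model = dict(
    backbone=dict(type='EfficientNet', arch='b0'),  # b0 is the smallest EfficientNet variant
)
crop_size = (512, 512)  # e.g. reduced from a larger Cityscapes crop such as (769, 769)
```

Halving the batch size roughly halves activation memory, while a smaller crop reduces memory quadratically with the side length, so the crop resolution is often the more effective knob.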