vinthony / ghost-free-shadow-removal

[AAAI 2020] Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN

Home Page: https://arxiv.org/abs/1911.08718


OOM When Running Demo on Jupyter Notebook

stevendae opened this issue · comments

When I run the demo in the Jupyter Notebook, I am able to load the pretrained model by executing the first code block. However, when I execute the second block, I get an OOM error. Is there an explanation for this?

ResourceExhaustedError: OOM when allocating tensor with shape[1,640,840,1475] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node concat_4}} = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](truediv_4, concat_3, concat_10/axis)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
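For reference, following that hint in TF 1.x looks roughly like the sketch below; `sess`, `output_tensor`, and `feed_dict` are placeholders standing in for whatever names the demo notebook actually uses:

```python
import tensorflow as tf

# TF 1.x: ask the runtime to report live tensor allocations when an OOM occurs,
# as suggested by the hint in the error message.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

# `sess`, `output_tensor`, and `feed_dict` stand in for the demo's own names.
result = sess.run(output_tensor, feed_dict=feed_dict, options=run_options)
```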

Hi, thanks for your attention.
It seems the size of the input image (along with its feature maps) exceeds the limits of your GPU memory.

You can try resizing the input image first and then feeding it to the network for prediction.
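A minimal sketch of such a resize step, assuming a PIL/NumPy pipeline; the `max_side=640` cap and the [0, 1] normalization are assumptions, so match them to the demo's own preprocessing:

```python
import numpy as np
from PIL import Image

def load_resized(path, max_side=640):
    """Downscale so the longer side is at most `max_side`, keeping aspect ratio."""
    img = Image.open(path).convert('RGB')
    scale = max_side / max(img.size)
    if scale < 1.0:
        new_size = (max(1, int(img.width * scale)), max(1, int(img.height * scale)))
        img = img.resize(new_size, Image.LANCZOS)
    # Normalizing to [0, 1] is an assumption; check what the demo expects.
    return np.asarray(img, dtype=np.float32) / 255.0
```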

Okay, yes, I solved it. Is there a reason why it has to be resized? Why does the model have a memory limit?

Sure, the model does NOT have a memory limitation.
This is because when you feed a large image to the network, it produces correspondingly large intermediate feature maps given our hyper-parameters (in your case, a 1×640×840×1475 float tensor). Putting such a huge tensor on the GPU is not easy; it may be larger than your GPU memory and cause the OOM.
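As a quick sanity check on the scale involved (plain arithmetic from the shape in the error message, not code from the repo):

```python
# Memory footprint of the single float32 tensor named in the OOM message:
n, h, w, c = 1, 640, 840, 1475        # shape from the error above
bytes_needed = n * h * w * c * 4      # float32 = 4 bytes per element
print(f"{bytes_needed / 1024**3:.2f} GiB")  # ~2.95 GiB for one intermediate tensor alone
```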

Running on the CPU might be another option, but it is a bit slow.
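One common way to force the CPU fallback in a notebook; a sketch, assuming the environment variable is set before TensorFlow initializes the GPU:

```python
import os

# Hide the GPU so TensorFlow falls back to the CPU; this must run
# before TensorFlow touches the device (e.g. at the top of the notebook).
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
```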