dataloader memory leak issue
wuqianliang opened this issue · comments
I have never seen this on my machine.
It's possibly caused by memory size.
Maybe you can try using a smaller num_workers, say 1, 2, or even 0.
also see this: pytorch/pytorch#8976 (comment)
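The suggestion above can be sketched as follows. This is a minimal example, not the original poster's code; the dataset is a placeholder:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset standing in for the real one.
dataset = TensorDataset(torch.arange(10.0))

# num_workers=0 loads batches in the main process, so no worker
# subprocesses are spawned that could accumulate memory.
loader = DataLoader(dataset, batch_size=2, num_workers=0)

for (batch,) in loader:
    pass  # training step would go here

print(len(loader))
```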
Hi, I met the same problem, did you solve it?
I found that the problem is caused by AverageMeter.update() in the training step. I solved it by detaching the input tensor during accumulation in AverageMeter.update().
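A minimal sketch of that fix, assuming an AverageMeter modeled on the one in the PyTorch ImageNet example (the exact class in the original code may differ):

```python
import torch

class AverageMeter:
    """Tracks the running average of a metric (e.g. the training loss)."""
    def __init__(self):
        self.sum = 0.0
        self.count = 0

    def update(self, val, n=1):
        # If `val` is a tensor still attached to the autograd graph,
        # accumulating it keeps the whole graph (and its intermediate
        # activations) alive across iterations, which looks like a
        # memory leak. Detach and convert to a Python float first.
        if torch.is_tensor(val):
            val = val.detach().item()
        self.sum += val * n
        self.count += n

    @property
    def avg(self):
        return self.sum / max(self.count, 1)

meter = AverageMeter()
loss = torch.tensor(2.0, requires_grad=True) * 3  # tensor with a grad_fn
meter.update(loss)                                 # safe: detached inside
meter.update(torch.tensor(4.0))
print(meter.avg)  # 5.0
```

Calling `.item()` (after `.detach()`) is what breaks the reference to the graph; passing the raw loss tensor into the meter is what caused the accumulation.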