Memory leak
kulich-d opened this issue
Hi!
You have a memory leak during training here.
It happens because `correct_num` still holds a reference to the autograd graph:
`print(correct_num)` shows `grad_fn=<AddBackward>`, so the graph for every batch is kept alive.
To fix the problem, I detach the tensors before computing the metrics:

```python
loss = loss.detach().cpu()
_, predicts = torch.max(output.detach().cpu(), 1)
correct_num = torch.eq(predicts.detach().cpu(), labels.detach().cpu()).sum()
```
After that, the memory leak stopped.
I've attached a memory profile file.
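For context, here is a minimal self-contained sketch (with a hypothetical model and random data, not the repo's actual training loop) showing the detach fix inside one training step. The key point is that anything kept around for logging or metrics should be detached, otherwise it pins the whole autograd graph for that batch:

```python
import torch
import torch.nn as nn

# Hypothetical tiny setup for illustration only.
model = nn.Linear(10, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(8, 10)
labels = torch.randint(0, 3, (8,))

# One training step.
output = model(inputs)
loss = criterion(output, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Detach before using tensors for metrics/logging: the detached
# copies carry no grad_fn, so the batch's graph can be freed.
loss_value = loss.detach().cpu()
_, predicts = torch.max(output.detach().cpu(), 1)
correct_num = torch.eq(predicts, labels.cpu()).sum()

print(loss_value.grad_fn)  # None: no graph retained
print(correct_num.item())
```

Accumulating `loss` itself across iterations (e.g. `total_loss += loss`) has the same effect and needs the same `.detach()` (or `loss.item()`).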
Hi, which module did you use to generate the profile file? Thanks.