chr5tphr / zennit

Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.


Not removing backward hook handler *might* cause problems?

rachtibat opened this issue

Hi Chris,

Thank you for your support.
I noticed in the Hook source code
that you register a backward hook, but you don't save the handle it returns.

As far as I understand, the hook is destroyed when the tensor is destroyed.
But if the tensor stays alive for whatever reason, the hook will persist as well. Might this cause a problem?
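For illustration, here is a minimal toy sketch of the pattern I mean (a plain tensor hook, not the actual Zennit Hook code):

```python
import torch

x = torch.ones(3, requires_grad=True)
out = (x * 2).sum()

# register_hook returns a RemovableHandle; if it is not stored,
# the hook cannot be removed explicitly and lives as long as the
# tensor and its autograd graph do
handle = x.register_hook(lambda grad: grad * 0.5)
out.backward()
handle.remove()  # only possible because the handle was kept
```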
What do you think?

Best

Hey Reduan,

I am aware of this and don't think it is really a problem: the tensor you get with each pass is a new instance and will be garbage collected as soon as all references to it are gone (remember, the hook is registered on the tensor during the forward pass, which is also when a new tensor instance is created each time).
Even if the tensor stays around, the hook is bound to that tensor's specific backward graph, which is not expected to change.
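To make that life cycle concrete, here is a toy sketch (not the actual Hook implementation), where a hook is registered on a fresh output tensor in each forward pass:

```python
import torch

weight = torch.randn(4, 2, requires_grad=True)

def forward(inp):
    # a new output tensor, and thus a new hook registration, per call
    out = inp @ weight
    out.register_hook(lambda grad: grad.clamp(min=0.0))  # handle not kept
    return out

for _ in range(3):
    out = forward(torch.randn(1, 4))
    out.sum().backward()
    # once `out` is rebound in the next iteration, the previous tensor,
    # its backward graph and the hook attached to it become unreachable
    # and are garbage collected together
```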
What you would gain by removing the handle is the ability to get the plain gradient instead when doing another backward pass with different grad_outputs while the graph is retained (as you are doing in your conditional LRP code).
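Roughly like this, with a toy stand-in for the LRP hook:

```python
import torch

x = torch.ones(3, requires_grad=True)
mid = x * 2
out = mid.sum()

# stand-in for the LRP hook, zeroing the gradient flowing through `mid`
handle = mid.register_hook(lambda grad: grad * 0.0)

# first backward pass: the hook modifies the gradient
modified, = torch.autograd.grad(out, x, retain_graph=True)

# after removing the handle, the retained graph yields the plain gradient
handle.remove()
plain, = torch.autograd.grad(out, x)
```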
I have been thinking before about how to control the tensor handles, since this is needed to compute the gradient with respect to the modified gradient without triggering the hook.
But I think removing the hook before computing the second-order gradient is not the best approach, since you would then lose the ability to retain the graph and compute the modified gradients on the same graph again.
In the future, I might implement something like a context manager or a switch to temporarily deactivate the hook (e.g. via a condition inside the hook) without removing it.
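Something along these lines (just a sketch of the idea, the names are made up):

```python
import torch
from contextlib import contextmanager

class ToggleableHook:
    """Sketch of a hook with an internal switch instead of removal."""
    def __init__(self):
        self.active = True

    def __call__(self, grad):
        if not self.active:
            return None  # returning None leaves the gradient unchanged
        return grad * 0.0  # stand-in for the modified (LRP) gradient

    @contextmanager
    def deactivated(self):
        # temporarily turn the hook into a no-op without removing it
        self.active = False
        try:
            yield self
        finally:
            self.active = True

x = torch.ones(3, requires_grad=True)
mid = x * 2
out = mid.sum()

hook = ToggleableHook()
mid.register_hook(hook)

modified, = torch.autograd.grad(out, x, retain_graph=True)
with hook.deactivated():
    # the hook stays registered, but the plain gradient is computed
    plain, = torch.autograd.grad(out, x)
```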