PyTorch Warning -- Non-Full Backward Hook when the Forward Contains Multiple Autograd Nodes
a-cowlagi opened this issue
I get the following user warning from PyTorch when I try to instantiate a FIM object using a simple 2-layer network for regression:

```
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py:974: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
```
Here is the model:
```python
import torch
import torch.nn as nn

class Net(nn.Module):  # class declaration not shown in the original post
    def __init__(self, params):
        super().__init__()
        self.model_ = nn.Sequential(
            nn.Linear(params['input_size'], params['hidden_size'], bias=False),
            nn.ReLU(),  # when modelling non-linearities
            # nn.Dropout(params['dropout_p']),
            nn.Linear(params['hidden_size'], params['output_size'], bias=False)
        )
        self.optim_ = torch.optim.Adam(
            self.model_.parameters(),
            lr=params['lr']
        )

    def forward(self, X):
        return self.model_(X)
```
Here is the instantiation of the FIM object:
```python
F = FIM(model=model, loader=trainloader, representation=PMatKFAC,
        variant='regression', n_output=1, device='cpu')
```
I'm not sure what the warning refers to, but since it indicates that a deprecated PyTorch feature is being used, I think it is worth looking into.
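For context, a minimal sketch of what the warning is about (assuming a PyTorch version that has `register_full_backward_hook`, i.e. 1.8+; the model and hook here are illustrative, not the library's internals): the library attaches backward hooks to modules to capture per-layer gradients, and the older `register_backward_hook` API triggers this warning on modules whose forward creates multiple autograd nodes.

```python
import torch
import torch.nn as nn

# A toy two-layer regression network, similar in shape to the one above
model = nn.Sequential(
    nn.Linear(4, 8, bias=False),
    nn.ReLU(),
    nn.Linear(8, 1, bias=False),
)

captured = []

def hook(module, grad_input, grad_output):
    # Record the gradient w.r.t. the module's output
    captured.append(grad_output[0].shape)

# model.register_backward_hook(hook)   # deprecated: emits the UserWarning above
model.register_full_backward_hook(hook)  # documented replacement

out = model(torch.randn(2, 4))
out.sum().backward()
```

With the full hook, `grad_input`/`grad_output` are guaranteed to match the documented behavior even when the module's forward spans several autograd nodes.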
Thanks for pointing this out!
This PR #27 should resolve the warning.
However, it may break backward compatibility with 2019-era versions of PyTorch.
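One common way to keep compatibility with older PyTorch releases is to fall back to the deprecated API when the new one is absent. This is only a sketch of that pattern (the helper name is hypothetical, and this is not necessarily how the PR handles it):

```python
import torch
import torch.nn as nn

def attach_backward_hook(module, hook):
    # Hypothetical compatibility helper:
    # register_full_backward_hook was added in PyTorch 1.8; older
    # versions only provide the (now deprecated) register_backward_hook.
    if hasattr(module, "register_full_backward_hook"):
        return module.register_full_backward_hook(hook)
    return module.register_backward_hook(hook)

# Usage: attach to a layer and run a backward pass
calls = []
layer = nn.Linear(3, 2)
attach_backward_hook(layer, lambda m, gi, go: calls.append(tuple(go[0].shape)))
layer(torch.randn(5, 3)).sum().backward()
```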
Thanks for the PR! Can it be merged into main so that I can pip install the updated version?
I just merged it!
Thomas