Backward behaviour of scatter_max
elias-ramzi opened this issue · comments
Elias Ramzi commented
Hi,
thank you for the great repo!
I had a question regarding the backward behaviour of scatter_max.
Is it like that of torch.max
, i.e. only one index receives the gradient? Or like that of torch.amax
, i.e. in case of ties all maximal indices receive the gradient?
This difference is explained for torch here: https://pytorch.org/docs/stable/generated/torch.amax.html.
Thank you for your help!
Matthias Fey commented
Only one index gets the gradient.
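(So scatter_max's backward follows the torch.max(dim=...) convention rather than the torch.amax one. As a sketch of the difference using only the built-in torch ops, not torch_scatter itself: with tied maxima, max(dim) sends the full gradient to a single index, while amax splits it evenly across all tied indices.)

```python
import torch

# Input with a tie for the maximum along dim=1.
x = torch.tensor([[1.0, 3.0, 3.0]], requires_grad=True)

# torch.max(dim=...): gradient flows to exactly one of the tied entries.
vals, idx = x.max(dim=1)
vals.sum().backward()
grad_max = x.grad.clone()

# Reset the accumulated gradient before the second backward pass.
x.grad = None

# torch.amax: gradient is distributed evenly over all tied entries.
y = x.amax(dim=1)
y.sum().backward()
grad_amax = x.grad.clone()

# grad_max has a single nonzero entry summing to 1;
# grad_amax assigns 0.5 to each of the two tied maxima.
print(grad_max)
print(grad_amax)
```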
Elias Ramzi commented
Thanks!