rusty1s / pytorch_scatter

PyTorch Extension Library of Optimized Scatter Operations

Home Page: https://pytorch-scatter.readthedocs.io


Backward behaviour of scatter_max

elias-ramzi opened this issue · comments

Hi,

thank you for the great repo!

I have a question regarding the backward behaviour of scatter_max.
Is it similar to that of torch.max, i.e. only one index receives the gradient? Or like that of torch.amax, i.e. in case of ties all indices receive the gradient?

This difference is documented for PyTorch here: https://pytorch.org/docs/stable/generated/torch.amax.html.
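
For reference, a minimal sketch of that difference in plain PyTorch (the example tensors are illustrative, not from the issue):

```python
import torch

# Two tied maxima along dim 0: compare how each op routes the gradient.
x = torch.tensor([2.0, 2.0, 1.0], requires_grad=True)
torch.max(x, dim=0).values.backward()
print(x.grad)  # e.g. tensor([1., 0., 0.]) -- only the reported argmax gets gradient

y = torch.tensor([2.0, 2.0, 1.0], requires_grad=True)
torch.amax(y, dim=0).backward()
print(y.grad)  # tensor([0.5000, 0.5000, 0.0000]) -- gradient split evenly among ties
```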

Thank you for your help!

Only one index gets the gradient.
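
A quick way to check this for scatter_max itself (a sketch assuming torch_scatter is installed; which of the tied entries wins is an implementation detail):

```python
import torch
from torch_scatter import scatter_max

# Both 2.0 entries map to segment 0 and tie for its maximum.
src = torch.tensor([2.0, 2.0, 1.0], requires_grad=True)
index = torch.tensor([0, 0, 1])

out, argmax = scatter_max(src, index)
out.sum().backward()
print(argmax)    # e.g. tensor([0, 2]) -- a single winning position per segment
print(src.grad)  # e.g. tensor([1., 0., 1.]) -- only one of the tied entries receives gradient
```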

Thanks!