luost26 / RDE-PPI

:mountain: Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction (ICLR 2023)

Question about the spatial attention implementation

chocolate9624 opened this issue · comments

Hello, first of all many thanks for providing the source code alongside the paper.

logits_spatial = self._spatial_logits(R, t, x) (modules/encoders/attn.py)

As this line shows, the input variable x passed to self._spatial_logits is the hidden representation built from amino-acid types and dihedral features, rather than atom coordinates.

Since this input is not spatial, how can the output be a spatially related logit? Also, what is the purpose of the sub-functions used inside _spatial_logits, such as local_to_global and global_to_local?

Could you help me understand this problem? Or am I misunderstanding something?

Thank you very much.

commented

@chocolate9624
In my view, although the initial x does not contain any spatial information, x is updated with pairwise spatial information from z. You can look at the following code:

        for block in self.blocks:
            res_feat = block(R, t, res_feat, pair_feat, mask)

The variable res_feat is updated using pair_feat in each block, so after several iterations res_feat (i.e., x) comes to contain spatial information.
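To make both points concrete, here is a minimal, hypothetical NumPy sketch (not the repository's actual implementation): local_to_global / global_to_local are assumed to be the standard rigid-frame transforms (p_global = R p_local + t and its inverse), which is how per-residue features can be interpreted as spatial query points, and the toy block shows how pair_feat can inject information into res_feat over several iterations. All function bodies here are illustrative assumptions.

```python
import numpy as np

def local_to_global(R, t, p_local):
    # p_global = R @ p_local + t, applied per residue frame
    return np.einsum('nij,nkj->nki', R, p_local) + t[:, None, :]

def global_to_local(R, t, p_global):
    # inverse transform: p_local = R^T @ (p_global - t)
    return np.einsum('nji,nkj->nki', R, p_global - t[:, None, :])

def block(R, t, res_feat, pair_feat):
    # toy message passing: each residue aggregates its row of pair features,
    # so spatial information stored in pair_feat flows into res_feat
    msg = pair_feat.mean(axis=1)  # (N, D)
    return res_feat + msg

N, D = 4, 8
rng = np.random.default_rng(0)
R = np.stack([np.eye(3)] * N)          # identity frames, for simplicity
t = rng.normal(size=(N, 3))
res_feat = rng.normal(size=(N, D))     # initially sequence-derived only
pair_feat = rng.normal(size=(N, N, D)) # carries pairwise spatial information

for _ in range(3):                     # several blocks, as in the loop above
    res_feat = block(R, t, res_feat, pair_feat)

# round-trip check: local -> global -> local recovers the points
p = rng.normal(size=(N, 5, 3))
p_rec = global_to_local(R, t, local_to_global(R, t, p))
print(np.allclose(p, p_rec))  # True
```

The round-trip check illustrates why the frame transforms matter: they let the model move between each residue's local coordinate system and global space without losing information, so features expressed in local frames can still be compared spatially across residues.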