lucidrains / alphafold2

To eventually become an unofficial Pytorch implementation / replication of Alphafold2, as details of the architecture get released

MSA attention gated problem

CiaoHe opened this issue · comments

Hi Phil:

Comparing the 'gating' unit with the original algorithm:

gates = self.gating(x)

I think the gating here is just missing a sigmoid(); please have a check.
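
For reference, here is a minimal sketch of what a sigmoid-gated attention output looks like, assuming the gating follows the pattern described in the AlphaFold2 supplement (gate logits from a linear projection, squashed with a sigmoid, then multiplied elementwise into the attention output). The class and variable names below are illustrative, not the repository's actual modules:

```python
import torch
import torch.nn as nn

class GatedAttentionOutput(nn.Module):
    """Sketch of a sigmoid-gated attention output.

    Hypothetical module for illustration only; it is not the repo's
    actual attention code. The key point is that the gate logits are
    passed through a sigmoid before gating the attention output.
    """
    def __init__(self, dim):
        super().__init__()
        self.gating = nn.Linear(dim, dim)  # gate logits from the input
        self.to_out = nn.Linear(dim, dim)  # final output projection

    def forward(self, x, attn_out):
        # squash gate logits into (0, 1) with a sigmoid, then gate
        gates = self.gating(x).sigmoid()
        return self.to_out(attn_out * gates)

# usage: gate an attention output with the same shape as the input
x = torch.randn(1, 16, 64)         # (batch, seq, dim)
attn_out = torch.randn(1, 16, 64)  # attention output to be gated
out = GatedAttentionOutput(64)(x, attn_out)
print(out.shape)  # torch.Size([1, 16, 64])
```

Without the sigmoid, `self.gating(x)` produces unbounded logits rather than values in (0, 1), so the "gate" no longer acts as a gate.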

Best,