lucidrains / RETRO-pytorch

Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch

Why are there so many position embeddings?

jasperhyp opened this issue · comments

Hi! Thanks for your great work; it's been very helpful for my project. I was just curious why there are so many position embeddings. It looks like the sequence gets a (1 to n) positional embedding in the RETRO class, and then rotary embeddings are applied again inside each attention module. I thought the two in Attention and CCA would be enough. Thanks in advance!

One is an absolute positional embedding; the other is a relative positional embedding (you need the relative positional embeddings for the CCA, the chunked cross-attention, to work well).

Rotary embeddings are one of the strongest relative positional embeddings out there.
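
To make the split concrete, here is a minimal sketch (not the repo's actual code; the helper names and toy dimensions are illustrative): the learned absolute embedding is added once to the input, while rotary embeddings rotate the queries and keys inside each attention layer, so the attention scores depend only on relative offsets.

```python
import torch
from torch import nn

def rotate_half(x):
    # split the last dim in two halves and rotate: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary(x, base=10000):
    # x: (seq_len, dim) with even dim; rotate each position by an angle
    # proportional to its index, so q . k depends only on the offset
    seq_len, dim = x.shape
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    freqs = torch.outer(torch.arange(seq_len).float(), inv_freq)  # (seq_len, dim/2)
    freqs = torch.cat((freqs, freqs), dim=-1)                     # (seq_len, dim)
    return x * freqs.cos() + rotate_half(x) * freqs.sin()

# toy setup (dimensions are illustrative, not the repo's defaults)
seq_len, dim = 8, 64
token_emb   = nn.Embedding(1000, dim)
abs_pos_emb = nn.Embedding(seq_len, dim)          # absolute, added once at the input
to_q, to_k  = nn.Linear(dim, dim), nn.Linear(dim, dim)

ids = torch.randint(0, 1000, (seq_len,))
x = token_emb(ids) + abs_pos_emb(torch.arange(seq_len))   # absolute positions here

# rotary (relative) positions are applied to queries/keys inside every attention layer
q, k = apply_rotary(to_q(x)), apply_rotary(to_k(x))
attn = (q @ k.t() / dim ** 0.5).softmax(dim=-1)           # scores depend on relative offsets
```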

Makes sense, thank you! Also, only the sequence being modeled gets the absolute position embedding (the retrieved context does not); is that deliberate as well?

And an unrelated question, just to confirm: the sequences are already retrieved before training (both the retrieval corpus and the training sequences are encoded by a frozen BERT), is that correct?

@jasperhyp yup, that is correct

The retrieved content undergoes relative positional embedding during cross-attention, iirc.

Yes, the retrieval is done prior to training, for efficiency.
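
A minimal sketch of that precompute step, assuming a Hugging Face BERT as the frozen encoder and faiss as the nearest-neighbour index (the repo's own tooling, chunking, and pooling may differ; the mean-pooling here is just one reasonable choice):

```python
import torch
import faiss                                    # stand-in ANN index; the repo's tooling may differ
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased').eval()   # frozen encoder: eval mode, no grads

@torch.no_grad()
def embed(chunks):
    # mean-pool the last hidden state into one vector per chunk (pooling choice is illustrative)
    batch = tokenizer(chunks, padding=True, truncation=True, return_tensors='pt')
    hidden = bert(**batch).last_hidden_state                    # (batch, seq, 768)
    mask = batch.attention_mask.unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)                 # (batch, 768)

# 1) index the retrieval corpus once, before training
corpus = ["chunk one of the corpus ...", "chunk two ...", "chunk three ..."]
corpus_np = embed(corpus).numpy()
faiss.normalize_L2(corpus_np)                                   # cosine similarity via normalized IP
index = faiss.IndexFlatIP(corpus_np.shape[-1])
index.add(corpus_np)

# 2) look up nearest neighbours for each training chunk ahead of time
query_np = embed(["a training chunk ..."]).numpy()
faiss.normalize_L2(query_np)
scores, neighbor_ids = index.search(query_np, 2)                # ids of the retrieved chunks
```

The retrieved neighbour ids can then be stored alongside the training data, so no BERT forward passes or index lookups are needed during training itself.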

Thank you!