google / trax

Trax — Deep Learning with Clear Code and Speed


Does the Reformer have more parameters than the baseline?

alexm-gc opened this issue

Regarding Reformer: paper | code

From the paper:

.. show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size d_model.

I see how the parameters of Attention and MLP do not increase (see the sketch after this list). But what about
(1) the embedding layer and
(2) the final projection layer?
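
Here is a rough sketch of how I understand the reversible trick (my own toy code, not the actual Trax implementation), which is why I agree the per-layer Attention/MLP parameters stay the same:

```python
import numpy as np

d_model = 4  # toy size

# Stand-ins for the attention and feed-forward sublayers; what matters here
# is only that both map d_model -> d_model, as in the baseline Transformer.
W_f = np.random.randn(d_model, d_model)  # "attention" weights F
W_g = np.random.randn(d_model, d_model)  # "feed-forward" weights G

def reversible_block(x1, x2):
    # RevNet-style step from the Reformer paper:
    #   y1 = x1 + F(x2)
    #   y2 = x2 + G(y1)
    y1 = x1 + x2 @ W_f
    y2 = x2 + y1 @ W_g
    return y1, y2

x1 = x2 = np.random.randn(1, d_model)  # both halves have size d_model
y1, y2 = reversible_block(x1, x2)

# F and G still consume d_model-sized inputs, so their parameter shapes
# (and counts) match the non-reversible baseline.
print(W_f.shape, W_g.shape)  # (4, 4) (4, 4)
```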

Question 0. Why do the parameters of the initial embedding layer not increase if we double d_model?
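
For concreteness, this is the naive count I have in mind (made-up vocab size and d_model, not the real Reformer config):

```python
vocab_size = 32000  # made-up number, just for illustration
d_model = 512

# Baseline Transformer: the embedding maps each token id to a d_model vector.
baseline_embedding_params = vocab_size * d_model

# If "both x1 and x2 have size d_model" meant the model really operates on a
# 2*d_model-wide activation, I would naively expect the embedding (and a
# final projection back to the vocabulary) to grow like this:
doubled_embedding_params = vocab_size * (2 * d_model)

print(baseline_embedding_params)  # 16384000
print(doubled_embedding_params)   # 32768000
```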