MAE decoder pos_emb
dnecho opened this issue · comments
李程 commented
Is it necessary to add pos_emb to decoder_tokens?
decoder_tokens = decoder_tokens + self.decoder_pos_emb(unmasked_indices)
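For context, the idea behind that line is that after masking, the decoder only receives the visible (unmasked) tokens, so each one is given the positional embedding of the patch position it originally came from. Below is a minimal NumPy sketch of the indexing pattern, not the actual vit-pytorch code; the array names, sizes, and index values are all illustrative assumptions.

```python
import numpy as np

num_patches, dim = 16, 8
rng = np.random.default_rng(0)

# hypothetical stand-ins for the model's learned tables
decoder_pos_emb = rng.normal(size=(num_patches, dim))  # one row per patch position
decoder_tokens = rng.normal(size=(10, dim))            # projected unmasked patch tokens
unmasked_indices = np.array([0, 2, 3, 5, 6, 8, 9, 11, 13, 15])

# the line under discussion: each decoder token receives the positional
# embedding of the patch position it was taken from, so the decoder knows
# where in the image each visible token sits
decoder_tokens = decoder_tokens + decoder_pos_emb[unmasked_indices]

print(decoder_tokens.shape)  # (10, 8)
```

The mask tokens fed to the decoder for the masked positions get the analogous treatment with the masked indices, so every decoder input carries its position.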