facebookresearch / mae

PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377


Question about PatchEmbed's Initialization Trick

tae-mo opened this issue

Hi, thanks for sharing your awesome work.
In models_mae.py, the initialization of the PatchEmbed's conv looks like this:

# initialize patch_embed like nn.Linear (instead of nn.Conv2d)
w = self.patch_embed.proj.weight.data
torch.nn.init.xavier_uniform_(w.view([w.shape[0], -1]))

As the comment says, the conv weights are intentionally flattened before initialization.
Why did you reshape the conv weights into nn.Linear's shape?
Is there any advantage to doing so?

Thanks in advance.
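For context, my own reading of what the flattening changes (not an official answer): `xavier_uniform_` derives its bound from the fan-in and fan-out of the tensor it is given. For a 4D conv weight, PyTorch multiplies both fans by the kernel area, whereas the flattened 2D view gets the same fans an `nn.Linear(in_chans * patch**2, embed_dim)` would, so the resulting bound is larger. A minimal sketch, assuming ViT-Base sizes (16×16 patches, 3 input channels, 768-dim embedding):

```python
import math
import torch

embed_dim, in_chans, patch = 768, 3, 16
w = torch.empty(embed_dim, in_chans, patch, patch)  # conv weight [out, in, kH, kW]

# Fans PyTorch would use on the raw 4D conv weight:
# both fan_in and fan_out are scaled by the kernel area (16*16)
fan_in_conv = in_chans * patch * patch       # 768
fan_out_conv = embed_dim * patch * patch     # 196608
bound_conv = math.sqrt(6.0 / (fan_in_conv + fan_out_conv))   # ~0.0055

# Fans on the flattened 2D view, as in models_mae.py:
# identical to an nn.Linear(in_chans * patch**2, embed_dim)
fan_in_lin = in_chans * patch * patch        # 768
fan_out_lin = embed_dim                      # 768
bound_lin = math.sqrt(6.0 / (fan_in_lin + fan_out_lin))      # 0.0625

# The view shares storage, so this initializes the conv weight in place
torch.nn.init.xavier_uniform_(w.view(embed_dim, -1))
assert w.abs().max().item() <= bound_lin
```

So flattening keeps the initialization scale consistent with treating each patch as one token fed through a linear projection, instead of shrinking the weights by the kernel area.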

Hi, I also have the same question. Any thoughts now?