facebookresearch / mae

PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377



Query about Persistent Use of Masking during Inference and Finetuning in MAE

rlgnswk opened this issue

Hello,

I have been reviewing the MAE implementation and noticed that, during both inference and finetuning, the model still uses the same mask ratio (0.75) as in pretraining. Could you clarify why the model does not encode the entire image instead of masking part of it? I am curious about the advantages of, or the rationale behind, keeping this masking strategy after pretraining.
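For concreteness, here is a minimal sketch of the behavior I am describing, assuming the `forward(imgs, mask_ratio=0.75)` signature of `models_mae.MaskedAutoencoderViT` and the `mae_vit_base_patch16` factory in this repo (checkpoint loading omitted):

```python
import torch

import models_mae  # assumes the script is run from the repo root

# Build the pretraining MAE model (sketch only; no pretrained weights loaded).
model = models_mae.mae_vit_base_patch16()
model.eval()

imgs = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    # Default call: forward() defaults to mask_ratio=0.75, so 75% of the
    # patches are dropped even though this is just inference.
    loss, pred, mask = model(imgs)

    # Explicitly passing mask_ratio=0.0 keeps every patch visible, which is
    # what I would have expected post-pretraining. (The reconstruction loss
    # is averaged over masked patches only, so it is not meaningful here.)
    loss_full, pred_full, mask_full = model(imgs, mask_ratio=0.0)
```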

Thank you for your insights!