hustvl / YOLOS

[NeurIPS 2021] You Only Look at One Sequence

Home Page: https://arxiv.org/abs/2106.00666

Can you explain why YOLOS-Small has 30 million parameters while DeiT-S has 22 million parameters?

gaopengcuhk opened this issue · comments

As the title suggests.

Hi @gaopengcuhk, thanks for your interest in our work and good question!

For the small- and base-sized models, the added parameters mainly come from positional embeddings (PE): to align with the DETR settings, we initially add randomly initialized (512 / 16) x (864 / 16) PE at every Transformer layer. But we later find that only interpolating the pre-trained first-layer PE to a larger size, i.e., (800 / 16) x (1344 / 16), without adding PEs at the intermediate layers, strikes a better accuracy & parameter trade-off: 36.6 AP v.s. 36.1 AP, and 24.6 M (22.1 M + 2.5 M 😄) v.s. 30.7 M (22.1 M + 8.6 M 😭). The tiny-sized model adopts this configuration.
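For readers wondering what "interpolating the pre-trained first-layer PE" looks like in practice, here is a minimal PyTorch sketch. The function name, argument names, and default shapes are illustrative assumptions, not the repo's actual code; it simply resizes the 2D patch-grid portion of a DeiT-style position embedding (pre-trained at 224 x 224, patch size 16) to a larger detection-resolution grid via bicubic interpolation, keeping any extra tokens (e.g. [CLS]) untouched.

```python
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, new_img_size, old_img_size=(224, 224),
                          patch_size=16, num_extra_tokens=1):
    """Resize pre-trained patch position embeddings to a new input resolution.

    pos_embed:     (1, num_extra_tokens + old_h * old_w, dim) pre-trained PE
    new_img_size:  target input size in pixels, e.g. (800, 1344)
    Returns a PE of shape (1, num_extra_tokens + new_h * new_w, dim).
    """
    dim = pos_embed.shape[-1]
    extra = pos_embed[:, :num_extra_tokens]            # e.g. [CLS] token PE, kept as-is
    patch_pe = pos_embed[:, num_extra_tokens:]         # flattened patch-grid PE

    old_h, old_w = old_img_size[0] // patch_size, old_img_size[1] // patch_size
    new_h, new_w = new_img_size[0] // patch_size, new_img_size[1] // patch_size

    # (1, old_h * old_w, dim) -> (1, dim, old_h, old_w) so we can interpolate spatially
    patch_pe = patch_pe.reshape(1, old_h, old_w, dim).permute(0, 3, 1, 2)
    patch_pe = F.interpolate(patch_pe, size=(new_h, new_w),
                             mode="bicubic", align_corners=False)
    # back to (1, new_h * new_w, dim)
    patch_pe = patch_pe.permute(0, 2, 3, 1).reshape(1, new_h * new_w, dim)

    return torch.cat([extra, patch_pe], dim=1)

# Example: DeiT-S PE (14 x 14 grid + [CLS]) resized for an 800 x 1344 input (50 x 84 grid).
pe = torch.randn(1, 1 + 14 * 14, 384)
pe_large = interpolate_pos_embed(pe, new_img_size=(800, 1344))
print(pe_large.shape)  # torch.Size([1, 4201, 384])
```

Only this single, resized first-layer PE is added to the parameter count, which is why the 2.5 M overhead is much smaller than the 8.6 M from per-layer PEs.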

We have added a detailed description in the Appendix and will submit it to arXiv soon (next week, hopefully). The pre-trained models will also be released soon, so please stay tuned :)

This issue won't be closed until we update our manuscript on arxiv.

Another question, why only add the prediction head on the last layer? Have you tried to add the prediction head to the last several layers like DETR?

Thanks for your valuable question.
We tried this configuration in our early study, and it gave no improvements.

Our guess at the reason: for DETR, deep supervision works because the supervision is already "deep enough", i.e., the decoders are stacked on top of at least a 50- / 101-layer ResNet backbone and 6 Transformer encoder layers. YOLOS, being a much shallower network, cannot benefit from deep supervision in the same way.
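For concreteness, the configuration being discussed (auxiliary prediction heads on intermediate layers, as tried in the early study) would look roughly like the sketch below. This is a hypothetical illustration, not the repo's code: `blocks`, `det_head`, and `aux_layers` are made-up names, and `det_head` stands for whatever module maps [Det] token features to class logits and boxes. YOLOS itself applies the head only after the final layer.

```python
import torch.nn as nn

class EncoderWithAuxHeads(nn.Module):
    """Run Transformer blocks and apply a shared detection head to selected
    intermediate outputs as well as the final one (DETR-style auxiliary losses)."""

    def __init__(self, blocks: nn.ModuleList, det_head: nn.Module, aux_layers=(8, 9, 10)):
        super().__init__()
        self.blocks = blocks          # Transformer encoder blocks
        self.det_head = det_head      # shared head: token features -> (class logits, boxes)
        self.aux_layers = set(aux_layers)

    def forward(self, x):
        outputs = []
        last = len(self.blocks) - 1
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i in self.aux_layers or i == last:
                outputs.append(self.det_head(x))   # auxiliary + final predictions
        # One set-prediction (Hungarian matching) loss would be computed per entry.
        return outputs
```

In DETR each of these intermediate predictions already sits on top of a deep CNN backbone plus the full encoder, whereas in YOLOS an intermediate layer is genuinely shallow, which matches the observation above that the extra heads did not help.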

Another question: it seems you add the position embedding to x at every layer, while in DeiT only the first layer adds the position embedding. Is this important in YOLOS?

We have actually answered this above (#3 (comment)): YOLOS with only the first-layer PE added is better in terms of AP and parameter efficiency :)

Thank you very much for your reply.

We have updated our manuscript on arXiv, so I'm closing this issue. Let us know if you have further questions.