clovaai / rexnet

Official PyTorch implementation of ReXNet (Rank eXpansion Network) with pretrained models

Counting FLOPs

jahongir7174 opened this issue · comments

Thanks for sharing your wonderful work.

I am curious about counting FLOPs.
I found this, but it shows higher FLOPs when I use HardSwish instead of Swish.
Can you share your FLOPs counting script?

Thank you very much

Sorry for the very late reply.

I am using this flop-counter, and I zero out the computational cost of the activation and pooling layers, following the convention.
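
The counter linked above is not reproduced here. As a minimal sketch of the same convention (an assumption, not necessarily the author's tool), fvcore's FlopCountAnalysis only counts operators with registered handlers such as conv and matmul, so activations and pooling contribute zero by default. The ReXNetV1 import assumes this repo's rexnetv1.py.

```python
# Minimal sketch, assuming fvcore (not necessarily the counter linked above)
# and the ReXNetV1 class from this repo's rexnetv1.py.
import torch
from fvcore.nn import FlopCountAnalysis

from rexnetv1 import ReXNetV1  # model definition shipped in this repo

model = ReXNetV1(width_mult=1.0).eval()
inputs = torch.randn(1, 3, 224, 224)

flops = FlopCountAnalysis(model, inputs)
# Activations and pooling have no registered handler, so they are
# skipped (counted as zero); silence the warnings about them.
flops.unsupported_ops_warnings(False)

# Note: fvcore reports multiply-accumulate counts under the name "flops".
print(f"{flops.total() / 1e6:.1f} MMac")
```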

Thanks for your reply.

As far as I know, counting the FLOPs of EfficientNet takes the activation function into account.
Can you share any reference for ignoring the FLOPs of the activation function?

Thank you very much

@jahongir7174

MobileNetV2 and V3 are the references you are looking for. Counting FLOPs over all elements, including activations, does not reproduce the numbers reported in the original papers (e.g., MobileNetV2 1.0 is reported at 300 MMac, but including the activations in the count yields a higher figure).
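
For a rough sense of the gap, here is a hypothetical sketch that charges one element-wise op per activated element in torchvision's MobileNetV2, showing how much the activations alone would add on top of the reported 300 MMac. The one-op-per-element costing is my assumption; counters differ on how they price activations.

```python
# Hypothetical illustration: how many element-wise ops would MobileNetV2's
# activations add if each activated element cost one op? (assumption)
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

model = mobilenet_v2().eval()
act_elems = []

# Record the number of output elements of every ReLU6 in the network.
hooks = [m.register_forward_hook(lambda mod, inp, out: act_elems.append(out.numel()))
         for m in model.modules() if isinstance(m, nn.ReLU6)]

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
for h in hooks:
    h.remove()

print(f"activations alone would add ~{sum(act_elems) / 1e6:.1f} M element-wise ops")
```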

However, as newer models such as Vision Transformers have been developed, the way of calculating FLOPs has not been unified, so small differences in the reported FLOPs of an identical model may exist.