count_normalization is only correct for BatchNorm; wrong FLOPs count for LayerNorm
pzpzpzp2 opened this issue
pzpzpzp2 commented
pytorch-OpCounter/thop/profile.py, line 32 (commit 43c064a)
The same count_normalization function is used for every norm-like module, but BatchNorm stores running estimates of the mean and std, while LayerNorm computes them at inference time. Shouldn't LayerNorm account for the cost of evaluating the mean and std? The difference is significant:
The mean is roughly n flops and the std another ~2n flops, and that's before the rest of the norm module, which is another ~2n.
Is there a reason LayerNorm should be counted as only 2n flops by reusing BatchNorm's estimate?
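For illustration, here is a minimal sketch of what a LayerNorm-aware hook could look like under that accounting (n for the mean, ~2n for the std, ~2n for the normalization, ~2n for the optional affine). The function name count_layer_norm and the exact constants are my assumptions, not thop's current implementation:

```python
import torch
import torch.nn as nn

def count_layer_norm(m: nn.LayerNorm, x, y):
    """Hypothetical hook; thop registers the ``total_ops`` buffer on the
    module when the hook is passed through ``custom_ops``."""
    x = x[0]
    n = x.numel()
    flops = n           # mean over the normalized dims: ~n adds
    flops += 2 * n      # variance/std: subtract mean, square, accumulate
    flops += 2 * n      # normalize: subtract mean, divide by std
    if m.elementwise_affine:
        flops += 2 * n  # per-element scale and shift
    m.total_ops += torch.DoubleTensor([int(flops)])
```

It could then be registered with `thop.profile(model, inputs=(x,), custom_ops={nn.LayerNorm: count_layer_norm})` to override the shared handler for LayerNorm modules only.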
dyhBUPT commented
I think you are right.
BTW, I think the MACs of BN (eval, no affine) should be n, not 2n, in the code.
pytorch-OpCounter/thop/vision/basic_hooks.py, lines 60 to 69 (commit 43c064a)
pytorch-OpCounter/thop/vision/calc_func.py, lines 43 to 45 (commit 43c064a)
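To make that point concrete: at eval time, a BatchNorm with stored running statistics can be folded into a single per-channel scale-and-shift, which is one multiply and one add per element, i.e. n MACs (2n flops) for n input elements. A hedged sketch of that folding (names and shapes below are illustrative, not thop code):

```python
import torch

def fused_bn_eval(x, running_mean, running_var, eps=1e-5):
    """Illustrative only: eval-mode BatchNorm folded into scale-and-shift.

    The per-channel scale/shift are precomputed from the stored statistics,
    so the data-dependent cost is one multiply and one add per element of x:
    n MACs (2n flops) for n = x.numel().
    """
    scale = (running_var + eps).rsqrt()      # per-channel, O(C) only
    shift = -running_mean * scale
    # reshape for broadcasting over an (N, C, ...) input
    view = [1, -1] + [1] * (x.dim() - 2)
    return x * scale.view(view) + shift.view(view)
```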