Lyken17 / pytorch-OpCounter

Count the MACs / FLOPs of your PyTorch model.

Do MACs and FLOPs count correctly for an INT8 quantized model?

Abanoub-G opened this issue

Hi,
I am trying to use thop's profile to measure the MACs and FLOPs of a model before and after applying quantisation.
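
For reference, a minimal sketch of the kind of measurement described above, using thop's profile call on an ordinary FP32 model. The model and input shape here are made up for illustration:

```python
import torch
import torch.nn as nn
from thop import profile

# A small illustrative FP32 model; any nn.Module works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

dummy_input = torch.randn(1, 3, 32, 32)
macs, params = profile(model, inputs=(dummy_input,))

# The convention referenced below: one MAC = two FLOPs.
flops = 2 * macs
print(f"MACs: {macs:.0f}, params: {params:.0f}, FLOPs (= 2 x MACs): {flops:.0f}")
```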

  • Does the current implementation of MAC counting include the INT8 quantized parameters of a quantized model, or only the floating-point (FP) ones?

  • If the implementation counts both INT and FP, then the FLOPs figure (FLOPs = 2 x MACs, per this reply) will not be accurate, since it would include both integer and floating-point operations, whereas FLOPs should count only the FP operations.

  • In the other case, where MAC counting includes only FP and not INT operations, the FLOPs calculation (FLOPs = 2 x MACs) would be fine. But then how does one count the INT operations?

Any advice/feedback will be greatly appreciated.
Thanks
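
To make the before/after comparison concrete, here is a hedged sketch of what the question describes, assuming dynamic INT8 quantization via torch.quantization.quantize_dynamic. Whether thop counts anything for the quantized modules is exactly what is being asked: thop registers counters per module type, so module types without a registered counter would simply be skipped.

```python
import torch
import torch.nn as nn
from thop import profile

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 128)

# Baseline FP32 measurement.
macs_fp32, _ = profile(model, inputs=(x,))

# Dynamic INT8 quantization of the Linear layers.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Quantized module types (e.g. torch.nn.quantized.dynamic.Linear) have no
# registered counter in thop, so their ops would not be counted -- this is
# the gap the issue asks about.
macs_int8, _ = profile(qmodel, inputs=(x,))

print(f"FP32 MACs: {macs_fp32:.0f} (FLOPs ~ {2 * macs_fp32:.0f})")
print(f"Quantized-model MACs reported by thop: {macs_int8:.0f}")
```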

Sorry, we currently do not support density or quantized calculations. This might improve in the future.