lliai / EMQ-series

[ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization


MWA4 (mixed weight bit widths, 4-bit activations) quantization of ResNet-18

maibaodexiaohangjiaya opened this issue · comments

I ran MWA4 quantization using the ResNet-18 model provided by the project and the code's default parameters, but my results differ substantially from those reported in the paper. Are there any other specific settings required to quantize the activations to 4 bits?
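For context, a minimal sketch of what the MWA4 setting means: each layer's weights get their own (searched) bit width, while all activations are fake-quantized to 4 bits. The symmetric uniform quantizer and the per-layer bit assignments below are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def fake_quantize(x, bits):
    """Symmetric uniform fake-quantization of an array to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit signed
    max_abs = np.abs(x).max()
    scale = max_abs / qmax if max_abs > 0 else 1.0
    # Round to the integer grid, clip to range, then map back to floats.
    return np.clip(np.round(x / scale), -qmax - 1, qmax) * scale

# Hypothetical mixed-precision assignment: weight bits vary per layer,
# activations are fixed at 4 bits everywhere (the "A4" in MWA4).
weight_bits = {"layer1": 8, "layer2": 4, "layer3": 6}  # illustrative only
act_bits = 4

rng = np.random.default_rng(0)
for name, w_bits in weight_bits.items():
    w = rng.standard_normal((16, 16)).astype(np.float32)
    a = rng.standard_normal((16,)).astype(np.float32)
    w_q = fake_quantize(w, w_bits)   # per-layer weight bit width
    a_q = fake_quantize(a, act_bits) # activations always 4-bit
```

A 4-bit symmetric quantizer can represent at most 16 distinct levels, which is why low-bit activation results are so sensitive to calibration settings.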