MobileNetV2 - Quantization-Aware Training - Low Accuracy
kg512 opened this issue
I am doing quantization-aware training for MobileNetV2 using TF-Slim (TF 1.15).
For the full model (without quantization) I get 80% accuracy after 1,200 steps, while the quantization-aware trained model is still at 40% accuracy after 40,000 steps.
To enable quantization-aware training, I set --quantize_delay=1. Do I need to do anything else?
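For context, this is the kind of command I am running (the paths and dataset flags below are illustrative, not my exact setup). My understanding is that --quantize_delay=N rewrites the graph with fake-quantization ops via tf.contrib.quantize and enables them at global step N, so a delay of 1 starts quantizing almost immediately instead of letting the float model converge first:

```shell
# Illustrative TF-Slim invocation (paths/dataset are placeholders).
# --quantize_delay=N inserts fake-quant ops and activates them at global
# step N; a common practice is to train in float first and only then
# fine-tune with quantization, e.g.:
python train_image_classifier.py \
  --model_name=mobilenet_v2 \
  --dataset_name=imagenet \
  --dataset_dir=/path/to/tfrecords \
  --train_dir=/path/to/train_dir \
  --quantize_delay=1000
```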
While looking into this, I found tensorflow/model-optimization#368. Does that issue affect TF-Slim as well?
I am trying to use TF-Slim to generate a quantization-aware model with uint8 input, since TF 2 only supports float input for quantization-aware models.
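For the uint8 conversion step, my plan is something like the following, using the TF 1.x tflite_convert tool (the graph file and array names are placeholders for my frozen quantization-aware graph, not verified values):

```shell
# Convert a quantization-aware frozen graph to a fully quantized TFLite
# model with uint8 input (TF 1.x converter; file and tensor names are
# placeholders). mean/std map float [0,1]-ish inputs onto uint8.
tflite_convert \
  --graph_def_file=frozen_mobilenet_v2_quant.pb \
  --output_file=mobilenet_v2_quant.tflite \
  --input_arrays=input \
  --output_arrays=MobilenetV2/Predictions/Reshape_1 \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_dev_values=128
```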