google / gemmlowp

Low-precision matrix multiplication

How are result_scale and result_zero_point determined?

victorygogogo opened this issue · comments

I was looking at doc/quantization_example.cc.

I have a question about result_scale and result_zero_point.

How are they determined?

Do you compute the real (floating-point) result first and then derive them from it?

But if we have to do that every time in a real network, the speed obviously suffers.

Can anyone help me solve this problem?

Maybe our paper gives more context:
https://arxiv.org/abs/1712.05877

@bjacob
Hi Benoit,
I read the paper you mentioned, but I still have the same question.

result_quantized_value = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i( (lhs_quantized_value[i] - lhs_zero_point) * (rhs_quantized_value[i] - rhs_zero_point) ) (5)

The above equation is the basic scheme for computing the quantized matrix multiplication. Since the input matrices are given, the lhs_scale * rhs_scale factor and the Sum_over_i part are easy to compute. But how to calculate result_scale and result_zero_point is not well described in either the paper or the gemmlowp documents.
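For concreteness, here is how I currently read equation (5), as a small Python sketch with made-up example values (the result_scale and result_zero_point are simply assumed to be given):

```python
import numpy as np

# Made-up example values, only to illustrate equation (5); not taken from gemmlowp.
lhs_scale, lhs_zero_point = 0.05, 128
rhs_scale, rhs_zero_point = 0.02, 120
result_scale, result_zero_point = 0.10, 100  # assumed to be given

lhs_quantized = np.array([130, 140, 125], dtype=np.int32)
rhs_quantized = np.array([118, 122, 119], dtype=np.int32)

# Sum_over_i( (lhs_quantized_value[i] - lhs_zero_point)
#             * (rhs_quantized_value[i] - rhs_zero_point) )
acc = int(np.sum((lhs_quantized - lhs_zero_point)
                 * (rhs_quantized - rhs_zero_point)))

# Equation (5): rescale the int32 accumulator into the result's quantized domain.
real_multiplier = lhs_scale * rhs_scale / result_scale
result_quantized_value = int(round(result_zero_point + real_multiplier * acc))
result_quantized_value = max(0, min(255, result_quantized_value))  # clamp to uint8

print(acc, result_quantized_value)
```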
Assuming the result quantized value has 8 bits, my guess is:

255 = result_quantized_value_max = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i_max (a)

and

0 = result_quantized_value_min = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i_min (b)

Subtracting (b) from (a), we get:

255 = (lhs_scale * rhs_scale / result_scale) * (Sum_over_i_max - Sum_over_i_min) (c)

Then,

result_scale = (lhs_scale * rhs_scale / 255) * (Sum_over_i_max - Sum_over_i_min)

Since Sum_over_i_max and Sum_over_i_min can be calculated, result_scale can be obtained from the above equation. Is this correct, and is this how you calculate result_scale and result_zero_point? Thank you so much.
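Here is a quick numeric check of this guess, with made-up numbers (just to show the arithmetic; I am not claiming this is what gemmlowp actually does):

```python
# Quick numeric check of the guess above, with made-up numbers.
lhs_scale, rhs_scale = 0.05, 0.02

# Hypothetical extreme values of the Sum_over_i(...) accumulator.
sum_over_i_min, sum_over_i_max = -20000, 30000

# From (c): 255 = (lhs_scale * rhs_scale / result_scale) * (Sum_over_i_max - Sum_over_i_min)
result_scale = (lhs_scale * rhs_scale / 255.0) * (sum_over_i_max - sum_over_i_min)

# Plugging back into (b):
#   0 = result_zero_point + (lhs_scale * rhs_scale / result_scale) * Sum_over_i_min
result_zero_point = -(lhs_scale * rhs_scale / result_scale) * sum_over_i_min

print(result_scale, result_zero_point)  # result_zero_point should land in [0, 255]
```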

The result scale and zero_point are not to be inferred from the input scales and zero points; that's why neither our example code nor the paper gives a formula for them. There is no such formula.

Instead, the quantization parameters of the result must be given by the user.

In a typical quantized neural network application, as in our paper, it is the training process that will record the min-max used for each matrix, including for the result matrix. The quantization and inference process will then use that pre-recorded min-max to quantize the result matrix.
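For reference, here is a minimal Python sketch (with made-up range values) of how a pre-recorded min/max is mapped to a (scale, zero_point) pair, in the same spirit as doc/quantization_example.cc:

```python
# Minimal sketch: map a pre-recorded (min, max) range to (scale, zero_point)
# for uint8 quantization. The range values below are made up.
def choose_quantization_params(min_val, max_val, qmin=0, qmax=255):
    # The range must contain 0 so that real 0.0 is exactly representable.
    min_val = min(min_val, 0.0)
    max_val = max(max_val, 0.0)
    scale = (max_val - min_val) / (qmax - qmin)
    # zero_point is the quantized value that represents real 0.0.
    zero_point = int(round(qmin - min_val / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    return scale, zero_point

# e.g. a result range recorded during training (made-up numbers):
result_scale, result_zero_point = choose_quantization_params(-3.0, 9.0)
print(result_scale, result_zero_point)
```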

@bjacob
Thanks Benoit. Is there any pretrained quantized model, such as MobileNet, that contains the scales and zero points?

I think there is; explore around
https://www.tensorflow.org/mobile/tflite/
and maybe ask on the issue tracker there if it's not obvious.

@bjacob Hello,
From your paper https://arxiv.org/abs/1712.05877 I gather that during training with simulated quantization you only quantize the weights and activations, so we can get the corresponding scale and zero_point.

(1) Could you tell me how to get the result scale and zero_point during the training process?
Is it right to run inference with the unquantized model, collect the [a; b] ranges of the result, and handle them just like the activations are handled during training with simulated quantization?
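If it helps, this is roughly what I have in mind: a sketch of the exponential-moving-average range tracking described in Section 3 of the paper (the decay value and the random fake activations are made up):

```python
import numpy as np

# Sketch of exponential-moving-average (EMA) range collection for a layer's
# output, as I understand Section 3 of the paper.
# The decay value and the random fake activations are assumptions.
ema_decay = 0.99
ema_min, ema_max = 0.0, 0.0

def update_range(batch, ema_min, ema_max, decay=ema_decay):
    # Observe this batch's extremes, then fold them into the running EMA.
    batch_min, batch_max = float(batch.min()), float(batch.max())
    ema_min -= (1.0 - decay) * (ema_min - batch_min)
    ema_max -= (1.0 - decay) * (ema_max - batch_max)
    return ema_min, ema_max

# During training, each batch's layer output updates the running range;
# the final (ema_min, ema_max) is frozen and used to quantize the result
# matrix at inference time.
for _ in range(100):
    fake_output = np.random.uniform(-1.0, 6.0, size=(32, 64))
    ema_min, ema_max = update_range(fake_output, ema_min, ema_max)

print(ema_min, ema_max)
```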

You said that "The quantization and inference process will then use that pre-recorded min-max to quantize the result matrix."
(2) How do you ensure that a quantized model using a pre-recorded min-max still generalizes well?

Thanks a lot, and good luck to you @bjacob

Redirecting these questions to @skligys who wrote Section 3 of this paper on training and is generally the training expert :-)

Same question. I have trained a quantized model with the TF Object Detection API, but when I list the global variables in the .ckpt, I only find the weight min/max and the min/max after relu6 (0 / 5.9997); there is no output min/max for the conv itself. Why?
The min/max tensor names look like this:

FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min/biased:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/min/local_step:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max/biased:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/act_quant/max/local_step:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/weights_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/weights_quant/max:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/act_quant/min:0
FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_1_depthwise/act_quant/max:0
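For reference, here is a minimal sketch of how the recorded min/max variables can be listed from the checkpoint (assuming TF 1.x; the checkpoint path is a placeholder):

```python
import tensorflow as tf  # assuming TF 1.x (use tf.compat.v1 in TF 2.x)

# Minimal sketch: list the recorded min/max variables stored in a checkpoint.
# The checkpoint path is a placeholder.
reader = tf.train.NewCheckpointReader("/path/to/model.ckpt")
for name in sorted(reader.get_variable_to_shape_map()):
    if name.endswith("/min") or name.endswith("/max"):
        print(name, reader.get_tensor(name))
```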