google / gemmlowp

Low-precision matrix multiplication

Is this product range of int8*int8 in comment document expected?

zhenhuaw-me opened this issue · comments

Hi there,

In

```cpp
// their range being [ -2^7 , 2^7 ), their products are in range
// [ -2^14 , 2^14 - 1 ), meaning that we can add two such values
```

I believe the product range should be [ (-2^7)*(2^7 - 1), (-2^7)*(-2^7) ], which lies in (-2^14, 2^14] - closed at 2^14. If we are accumulating int8*int8 + int8*int8 into an int16, don't we need the assumption that -128 is excluded from the int8 range (from "Appendix B: ARM NEON details" of the paper)? If int8 may take the value -128, then int8*int8 + int8*int8 can be as large as 2^15, which cannot be held in an int16.

Thanks

Looked into the tflite/gemmlowp stack. For quantized conv,
https://github.com/tensorflow/tensorflow/blob/c865ec5621c013a7f8a4a26d380782e63117224f/tensorflow/lite/kernels/internal/optimized/optimized_ops.h#L2082-L2085 loads the lhs (filter, value range $(0, 255]$) and the rhs (input, value range $[0, 255]$), so the int8*int8 + int8*int8 value can be held in an int16.

However, I am not sure how the input uint8 data in

```cpp
void Run(std::int32_t* dst_ptr, std::size_t dst_row_stride,
         std::size_t dst_col_stride, const std::uint8_t* lhs_ptr,
         const std::uint8_t* rhs_ptr, std::size_t start_depth,
         std::size_t run_depth) const override {
```

which is loaded by

```
"ld1 {v4.16b}, [%[lhs_ptr]], #16\n"
```

can be computed by signed instructions like

```
"smull v8.8h, v0.8b, v4.8b\n"
"smull v9.8h, v1.8b, v4.8b\n"
```

Would you please give a hint?

You are right that the comment at kernel_neon.h:708 is incorrect. It fails to mention that in order to avoid overflow in int16 := int8*int8 + int8*int8, it is necessary to require the int8 values to avoid the value -128.

As you found, this has been amended in the paper and in the way that TFLite uses gemmlowp. There is also a signedness discrepancy: the 8-bit buffers in TFLite and at the API surface of gemmlowp are unsigned uint8, while the kernels internal to gemmlowp use signed int8 throughout. The switch from unsigned to signed is implemented in the 'packing' phase of gemmlowp.

For the pack/compute/unpack phases of gemmlowp, refer to this doc:
https://github.com/google/gemmlowp/blob/master/doc/design.md
The portable (non-NEON) implementation of the packing phase is in this file:
https://github.com/google/gemmlowp/blob/master/internal/pack.h
Inside it, here is where the unsigned->signed conversion occurs:

gemmlowp/internal/pack.h

Lines 272 to 273 in 58825b1

```cpp
const std::int16_t kernel_val_unwrapped =
    src_val - kZeroPointInputValue;
```

Thank you @bjacob for the detailed knowledge sharing! That's really helpful! I didn't realize there is a packing phase in gemmlowp; I should have read the docs more carefully.