google / XNNPACK

High-efficiency floating-point neural network inference operators for mobile, server, and Web

Regarding x8-lut op

Darshcg opened this issue · comments

Hi XNNPACK team,

I want to understand what exactly the x8-lut op does. Can I get a small description of this op please?

Thanks

There's no X8-LUT operator in XNNPACK, but there's a corresponding microkernel. This microkernel maps a vector of 8-bit inputs to a vector of 8-bit outputs using a 256-entry table lookup.
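A scalar sketch of what such a microkernel computes, for illustration only — the function name and signature here are hypothetical, and XNNPACK's real X8-LUT microkernels are SIMD-vectorized (e.g. using NEON or AVX byte-shuffle instructions):

```c
#include <stddef.h>
#include <stdint.h>

// Sketch of an X8-LUT microkernel: each 8-bit input byte indexes into a
// 256-entry table to produce the corresponding 8-bit output byte.
static void x8_lut_scalar(size_t n, const uint8_t* input, uint8_t* output,
                          const uint8_t table[256]) {
  for (size_t i = 0; i < n; i++) {
    output[i] = table[input[i]];
  }
}
```

Because the table covers every possible input value, any byte-to-byte function can be applied this way at the cost of one lookup per element.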

Hi @Maratyszcza,

Thanks a lot for your reply; that clarifies it. One last question: where exactly is this microkernel used? Is it used to compute convolutions?

Thank you

Hi @Maratyszcza,

Looking forward to your reply.

Thank you

It is used for unary elementwise operations (e.g. sigmoid, tanh, elu) on 8-bit quantized data.

Thank you for your response @Maratyszcza!