pytorch / glow

Compiler for Neural Network hardware accelerators

MaxPool is assuming same scale/offset on quantized element kind types?

et-nivard opened this issue · comments

The AvgPool implementation on the Interpreter correctly handles different scale/offset values between the input and output operands. However, MaxPool seems to assume the same scale/offset, and produces inconsistent results when the scale/offset changes between input and output. Is this intended behavior? Maybe it's a constraint coming from the expected quantization schema? If so, is there any protection to detect invalid MaxPool instances?

Yeah, I think this was based on the expected quantization schema: MaxPool behaves more like a select, in that it only does comparisons to determine which elements to pick and never scales the values themselves. I think it's reasonable to add support for different output scale/offset if it's desired.

Thanks @jfix71, makes sense.