pytorch / glow

Compiler for Neural Network hardware accelerators

FullyConnected with quantized activations and float bias failing on Interpreter

et-nivard opened this issue

The Interpreter reports FullyConnected operators with quantized activations and a float bias as supported (see https://github.com/pytorch/glow/blob/master/lib/Backends/Interpreter/Interpreter.cpp#L192). However, when I actually run such a node I hit the assert at https://github.com/pytorch/glow/blob/master/lib/Backends/Interpreter/InterpreterNodes.cpp#L315, because the implementation relies on the dispatchQuantizedWithAccumulationAndBiasImpl template, which does not support a float bias.
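
For reference, here is a minimal sketch of the node configuration I mean: Int8 quantized activations and weights combined with a FloatTy bias, run on the Interpreter backend. The builder calls follow Glow's usual graph API, but the exact signatures (and the scale/offset values, which are arbitrary) are assumptions and may differ between Glow versions.

```cpp
#include "glow/ExecutionEngine/ExecutionEngine.h"
#include "glow/Graph/Graph.h"

using namespace glow;

int main() {
  // Build a tiny graph targeting the Interpreter backend.
  ExecutionEngine EE("Interpreter");
  Module &mod = EE.getModule();
  Function *F = mod.createFunction("fc_float_bias");

  // Quantized (Int8) activations and weights, but a *float* bias.
  // Scale/offset values are placeholders for illustration only.
  auto *input = mod.createPlaceholder(ElemKind::Int8QTy, {1, 32}, 1.0, 0,
                                      "input", /* isTrainable */ false);
  auto *weights =
      mod.createConstant(ElemKind::Int8QTy, {32, 10}, 1.0, 0, "weights");
  auto *bias = mod.createConstant(ElemKind::FloatTy, {10}, "bias");

  auto *FC = F->createFullyConnected("fc", input, weights, bias);
  F->createSave("save", FC);

  // isOpSupported() accepts this node, but executing it trips the
  // assert in the Interpreter's FullyConnected implementation.
  PlaceholderBindings bindings;
  bindings.allocate(mod.getPlaceholders());
  EE.compile(CompilationMode::Infer);
  EE.run(bindings); // assertion failure on the float bias
  return 0;
}
```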