xboot / libonnx

A lightweight, portable, pure C99 ONNX inference engine for embedded devices with hardware acceleration support.

Tensorflow model with opset 12 seems to crash when loaded

Planet-Patrick opened this issue

I have a model converted from TensorFlow that uses opset 12 (converted with tf2onnx.convert).
The model opens fine in Netron and elsewhere, but it crashes somewhere in Concat_reshape when I try to load it with onnx_context_alloc_from_file. I tried compiling for both x86 and x64 with the same result.

Here are the model properties as viewed through Netron:
[screenshot: model properties in Netron, showing opset 12]

The models supplied in the libonnx test directory load fine. Do you have any suggestions for how to get this working? Thanks.

@jianjunjiang Thanks. My model does indeed contain LSTM, which is not supported there. From the README I thought that libonnx supported all of opset 14, but now I take it that's not the case. Is that correct? So libonnx in fact only supports a subset of opset 14?

@jianjunjiang Actually, it looks like there's no actual LSTM node in the exported model, but instead its constituents, such as Loop, which I see is also not supported.

@jianjunjiang Is there an easy way to see when libonnx hits an unsupported operator? I'm considering finding all of the missing ones and trying to add them myself.

If an operator is not checked in this document, it is not supported. Thanks for your research.
the-supported-operator-table