tensorflow / flutter-tflite


Issue with Unsupported Ops in TFLite Flutter Plugin

Elienvalleau opened this issue

Hello,

I'm encountering an issue when trying to run inference with a TensorFlow Lite model in Flutter. The model uses some TensorFlow ops that are not supported by the standard TensorFlow Lite interpreter. The specific error message I'm receiving is:

```
Select TensorFlow op(s), included in the given model, is(are) not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
```

I've ensured that the tensorflow-lite-select-tf-ops dependency is included in my Android build, but the error persists:

```gradle
dependencies {
    implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.12.0'
}
```

And here is how I converted my model:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable TensorFlow ops.
]
tflite_model = converter.convert()
```

Is there any solution?
Thanks
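
As a sanity check, the converted model can be exercised in plain Python first, since the full tensorflow pip package bundles the Flex (Select TF ops) delegate; a minimal sketch, assuming the model was saved as model.tflite:

```python
import numpy as np
import tensorflow as tf

# The full `tensorflow` pip package links the Flex delegate automatically,
# so Select TF ops run here even when they fail on device.
interpreter = tf.lite.Interpreter(model_path='model.tflite')  # assumed filename
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input matching the model's declared shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']).shape)
```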

Hey. Have you tried that specific model with Android or iOS directly (without the Flutter layer on top of it), by chance? I have a feeling there's something specific about the model that just isn't convertible (TFLite doesn't support all of the ops available in regular TensorFlow), so it might not be a go for this situation.

Hey @PaulTR, I haven't had the opportunity to try the TFLite model directly on Android or iOS, but it works very well when run directly in Python. To provide more context, it's an OCR model that was originally in PaddlePaddle, which I converted to ONNX, then to TensorFlow, and finally to TensorFlow Lite.
The error also mentions "Node number 407 (FlexConv2D) failed to prepare."
Thx
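
One way to pin down which ops ended up as Flex ops (and therefore need the Select TF ops runtime on device) is the TFLite model analyzer; a minimal sketch, assuming the file is named model.tflite:

```python
import tensorflow as tf

# Prints every operator in the model; Select TF ops appear with a "Flex"
# prefix (e.g. FlexConv2D) and need the Select TF ops runtime on device.
tf.lite.experimental.Analyzer.analyze(model_path='model.tflite')  # assumed filename
```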

I had the same issue when I converted a PyTorch model to ONNX and then to TensorFlow. The model used was BERT base uncased.


I honestly would be very surprised if it worked after you've converted between three separate frameworks and then into a TFLite version (which is a subset of TensorFlow) :) My guess is that, for yours, you'd want to try to reduce some of those steps when creating the model.

Yeah, but at present there's no way to run a PyTorch model (especially big ones like BERT) on edge devices using Flutter. So the route is PyTorch -> ONNX -> TensorFlow -> TFLite.
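
For anyone following that route, here is a minimal sketch of the ONNX -> TensorFlow -> TFLite steps using the onnx and onnx-tf packages; the file names are placeholders, and depending on the model some ops may still only convert as Flex ops:

```python
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# ONNX -> TensorFlow SavedModel (file names are placeholders).
onnx_model = onnx.load('model.onnx')
prepare(onnx_model).export_graph('saved_model')

# SavedModel -> TFLite, keeping Select TF ops enabled as in the
# conversion snippet above.
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model')
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
with open('model.tflite', 'wb') as f:
    f.write(converter.convert())
```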