googlecodelabs / tensorflow-for-poets-2


AutoML TfLite Android Edge Device Tutorial: BufferOverflowException

tvanfossen opened this issue

I've followed the steps in these tutorials to build a custom edge-device model, then exported it and adapted the sample code as described:

https://cloud.google.com/vision/automl/docs/edge-quickstart
https://cloud.google.com/vision/automl/docs/tflite-android-tutorial

The custom model I am trying to bring up in the TFLite camera app sample was trained on 100k+ images with 25 labels. The app runs fine with the pretrained model that is cloned with the repo.

Sample app code has been adjusted per the tflite-android tutorial above.

The code fails with a BufferOverflowException inside convertBitmapToByteBuffer at:
imgData.putFloat((((val >> 16) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
imgData.putFloat((((val >> 8) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);
imgData.putFloat((((val) & 0xFF)-IMAGE_MEAN)/IMAGE_STD);

Any suggestions as to why this might be occurring?
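For context on where the overflow comes from: if the exported AutoML Edge model is quantized, its input buffer is sized at 1 byte per channel value, but the float-model code path above writes 4 bytes per value with putFloat, so the buffer runs out after roughly a quarter of the pixels. A minimal sketch reproducing this with plain ByteBuffer arithmetic (no Android dependencies; the 224x224x3 input size is an assumption for illustration):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

class BufferDemo {
    // Returns true if writing `values` floats into a buffer sized for
    // `values` single bytes overflows (4 bytes written per 1 byte allocated).
    static boolean floatWriteOverflows(int values) {
        ByteBuffer buf = ByteBuffer.allocateDirect(values); // quantized sizing: 1 byte per value
        try {
            for (int i = 0; i < values; i++) {
                buf.putFloat(0.0f); // float-model code path: 4 bytes per value
            }
            return false;
        } catch (BufferOverflowException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        int size = 224 * 224 * 3; // illustrative input: 224x224 RGB
        System.out.println("overflowed=" + floatWriteOverflows(size));
    }
}
```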

commented

I have also faced the same issue. I changed putFloat to put and cast the value passed to (byte):

imgData.put((byte) ((((val >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD));

The app stopped crashing, but the model's predictions are very, very wrong.
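A likely reason the predictions go wrong: a quantized input tensor expects the raw uint8 pixel value, so the float mean/std normalization should be dropped entirely, not cast. Normalizing to roughly [-1, 1] and then casting to byte truncates nearly every pixel to -1, 0, or 1. A minimal sketch of the effect (the IMAGE_MEAN/IMAGE_STD values of 128 are an assumption matching the sample's typical defaults):

```java
class QuantCastDemo {
    // Assumed sample constants; the tf-for-poets ImageClassifier uses ~128.
    static final float IMAGE_MEAN = 128.0f;
    static final float IMAGE_STD = 128.0f;

    // What the cast-the-normalized-float workaround actually stores:
    // the float lands in [-1, 1] and the byte cast truncates it.
    static byte normalizedThenCast(int channelValue) {
        return (byte) ((channelValue - IMAGE_MEAN) / IMAGE_STD);
    }

    // What a quantized input tensor expects: the raw uint8 channel value.
    static int rawUint8(int channelValue) {
        return channelValue & 0xFF;
    }

    public static void main(String[] args) {
        // A bright pixel (200) collapses to 0 after normalize-then-cast,
        // so the model effectively sees a near-blank image.
        System.out.println("cast=" + normalizedThenCast(200)
                + " raw=" + rawUint8(200));
    }
}
```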

I used this tutorial instead: https://www.tensorflow.org/lite/models/image_classification/android

And it works with no problem: just swap the .tflite and .txt files exported from AutoML into the assets folder and change the ClassifierQuantizedModel to point to the custom files instead of the pregenerated ones.
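The repointing step above can be sketched roughly as follows. The method names getModelPath()/getLabelPath() are assumptions modeled on the TFLite Android example's Classifier hierarchy, and model.tflite/dict.txt stand in for whatever file names the AutoML export produced:

```java
// Minimal stand-in for the example app's abstract classifier base class.
abstract class Classifier {
    protected abstract String getModelPath();
    protected abstract String getLabelPath();
}

// The quantized classifier subclass, repointed at the custom AutoML files
// copied into the app's assets folder.
class ClassifierQuantizedModel extends Classifier {
    @Override
    protected String getModelPath() {
        return "model.tflite"; // AutoML-exported model in assets/
    }

    @Override
    protected String getLabelPath() {
        return "dict.txt"; // AutoML-exported labels file in assets/
    }

    public static void main(String[] args) {
        Classifier c = new ClassifierQuantizedModel();
        System.out.println(c.getModelPath() + " " + c.getLabelPath());
    }
}
```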