nolanliou / mobile-deeplab-v3-plus

Deeplab-V3+ model with MobilenetV2/MobilenetV3 on TensorFlow for mobile deployment.

Using pretrained model in tflite form

wpmed92 opened this issue

Hi!

I used the pretrained MobileNetV2 513x513 model and first analyzed it with the summarize_graph tool, which gave me the following output:

```
Found 1 possible inputs: (name=Input, type=float(1), shape=[?,513,513,3]) No variables spotted. Found 1 possible outputs: (name=Output, op=ExpandDims)
```

Then I converted this model to tflite, which was successful.

Then I tried to use it in my Android app, with no success:

```java
private static final float IMAGE_MEAN = 128.0f;
private static final float IMAGE_STD = 128.0f;
private final static int INPUT_SIZE = 513;
private final static int NUM_CLASSES = 2;
private final static int COLOR_CHANNELS = 3;
private final static int BYTES_PER_POINT = 4;
...
mImageData = ByteBuffer.allocateDirect(
        1 * INPUT_SIZE * INPUT_SIZE * COLOR_CHANNELS * BYTES_PER_POINT);
mImageData.order(ByteOrder.nativeOrder());

mOutputs = ByteBuffer.allocateDirect(
        1 * INPUT_SIZE * INPUT_SIZE * NUM_CLASSES * BYTES_PER_POINT_OUT);
```
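For completeness, this is roughly how I fill mImageData from a Bitmap, using the IMAGE_MEAN/IMAGE_STD constants above. Whether (pixel - 128) / 128 is the normalization this model actually expects is my assumption:

```java
import android.graphics.Bitmap;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

class InputPreprocessor {
    // Same constants as in the snippet above.
    private static final float IMAGE_MEAN = 128.0f;
    private static final float IMAGE_STD = 128.0f;
    private static final int INPUT_SIZE = 513;
    private static final int COLOR_CHANNELS = 3;
    private static final int BYTES_PER_POINT = 4;

    /** Scales the bitmap and writes normalized RGB floats into a direct ByteBuffer. */
    static ByteBuffer toInputBuffer(Bitmap bitmap) {
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true);
        int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
        scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);

        ByteBuffer imageData = ByteBuffer
                .allocateDirect(INPUT_SIZE * INPUT_SIZE * COLOR_CHANNELS * BYTES_PER_POINT)
                .order(ByteOrder.nativeOrder());
        for (int pixel : pixels) {
            // ARGB int -> normalized R, G, B floats in HWC order.
            imageData.putFloat((((pixel >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            imageData.putFloat((((pixel >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
            imageData.putFloat(((pixel & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        }
        return imageData;
    }
}
```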

First, there was a buffer size mismatch between my mOutputs buffer and the output buffer expected by TensorFlow Lite.
What are the shape and the name of the model's output?
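In the meantime, here is a minimal sketch of how the buffers could be sized by asking the interpreter for its tensor shapes instead of hard-coding them; the model path is a placeholder and the approach is just my assumption about the right way to do it:

```java
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.Tensor;

import java.io.File;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

class BufferSizing {
    static void run() {
        // Placeholder path; in the real app the model would be loaded from assets.
        Interpreter interpreter = new Interpreter(new File("/sdcard/deeplab_mnv2_513.tflite"));

        // Let the interpreter report the actual input/output shapes and byte sizes.
        Tensor input = interpreter.getInputTensor(0);
        Tensor output = interpreter.getOutputTensor(0);
        System.out.println("input shape:  " + Arrays.toString(input.shape()));
        System.out.println("output shape: " + Arrays.toString(output.shape()));

        // Sizing the buffers from the reported byte counts avoids the mismatch described above.
        ByteBuffer imageData = ByteBuffer.allocateDirect(input.numBytes())
                .order(ByteOrder.nativeOrder());
        ByteBuffer outputs = ByteBuffer.allocateDirect(output.numBytes())
                .order(ByteOrder.nativeOrder());

        // ... fill imageData as above, then:
        interpreter.run(imageData, outputs);
        interpreter.close();
    }
}
```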
Also, when I ran the frozen inference graph in Python, without the tflite conversion, I still had no success: the output was all black, with 0 person pixels.
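For clarity, this is roughly how I am interpreting the output, assuming it is a float tensor of per-class scores with shape [1, 513, 513, NUM_CLASSES]; that shape is exactly what I am unsure about. If the ExpandDims output is instead an already-argmaxed label map of shape [1, 513, 513, 1], the decoding would just read one value per pixel:

```java
import java.nio.ByteBuffer;

class MaskDecoder {
    /**
     * Decodes the output buffer into a per-pixel class mask, assuming a float
     * output of shape [1, size, size, numClasses] in row-major (NHWC) order.
     */
    static int[][] decode(ByteBuffer outputs, int size, int numClasses) {
        outputs.rewind();
        int[][] mask = new int[size][size];
        for (int y = 0; y < size; y++) {
            for (int x = 0; x < size; x++) {
                // Pick the class with the highest score for this pixel (argmax).
                int best = 0;
                float bestScore = outputs.getFloat();
                for (int c = 1; c < numClasses; c++) {
                    float score = outputs.getFloat();
                    if (score > bestScore) {
                        bestScore = score;
                        best = c;
                    }
                }
                mask[y][x] = best; // 0 = background, 1 = person for NUM_CLASSES = 2
            }
        }
        return mask;
    }
}
```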