mozilla / DeepSpeech

DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.

Repository from GitHub: https://github.com/mozilla/DeepSpeech

How do I prepare the input and the output for TensorFlow Lite using the deepspeech.tflite model?

Himly1 opened this issue

I am new to TensorFlow, so I want to know: is there any documentation or demo that describes how to use the deepspeech-0.9.3-models.tflite model?

I know how to load the TFLite model with TensorFlow, but I have no idea how to prepare the input and the output for the model.
Here is the Java code I use to load the model:

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import org.tensorflow.lite.Interpreter;
import java.io.FileInputStream;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

private Interpreter tflite;

public void loadModel(Context context) throws Exception {
    // Memory-map the bundled .tflite model from the APK assets.
    AssetFileDescriptor fileDescriptor = context.getAssets().openFd("deepspeech-0.9.3-models.tflite");
    FileInputStream is = new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel channel = is.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    // Build the TFLite interpreter from the mapped model.
    tflite = new Interpreter(buffer, new Interpreter.Options());
}
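
Once the model is loaded, I can at least dump its input and output tensor shapes to see what the interpreter expects. This is only a small sketch using the generic Interpreter API (it assumes the tflite field from loadModel() above and needs org.tensorflow.lite.Tensor, android.util.Log and java.util.Arrays); I have not checked what the DeepSpeech model actually reports:

public void describeModel() {
    // Print the shape and data type of every input tensor the model declares.
    for (int i = 0; i < tflite.getInputTensorCount(); i++) {
        Tensor t = tflite.getInputTensor(i);
        Log.d("DeepSpeechTFLite", "input " + i + " shape=" + Arrays.toString(t.shape())
                + " type=" + t.dataType());
    }
    // Same for the output tensors.
    for (int i = 0; i < tflite.getOutputTensorCount(); i++) {
        Tensor t = tflite.getOutputTensor(i);
        Log.d("DeepSpeechTFLite", "output " + i + " shape=" + Arrays.toString(t.shape())
                + " type=" + t.dataType());
    }
}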

But I have no idea how to prepare the input and the output for tflite.run().
Here is the definition of the function:

[screenshot of the tflite.run() definition]
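
For what it's worth, this is roughly how I imagine the call looking for a model with several input and output tensors, using runForMultipleInputsOutputs instead of run. The buffer allocation is only a sketch (it needs java.nio.ByteBuffer, java.nio.ByteOrder, java.util.HashMap and java.util.Map), and I have not verified the shapes or tensor order against deepspeech-0.9.3-models.tflite:

public Map<Integer, Object> runOnce(Object[] inputs) {
    Map<Integer, Object> outputs = new HashMap<>();
    for (int i = 0; i < tflite.getOutputTensorCount(); i++) {
        // Direct, native-order byte buffer sized to hold this output tensor.
        int numBytes = tflite.getOutputTensor(i).numBytes();
        outputs.put(i, ByteBuffer.allocateDirect(numBytes).order(ByteOrder.nativeOrder()));
    }
    // inputs must be ordered the same way as the model's input tensors.
    tflite.runForMultipleInputsOutputs(inputs, outputs);
    return outputs;
}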

Any ideas? Thanks.