google / live-transcribe-speech-engine

Live Transcribe is an Android application that provides real-time captioning for people who are deaf or hard of hearing. This repository contains the Android client libraries for communicating with Google's Cloud Speech API that are used in Live Transcribe.

Is NDK really needed here?

nicksav opened this issue · comments

commented

Hey guys,

We are thinking of getting this into a Flutter app and then onto Google Glass EE2, and we're wondering how to make that happen.
We've hit a lot of roadblocks with the NDK. Do we really need it in this app?

Correct me if I am wrong, but isn't all the NDK usage there for the extra codec implementations? So if we just remove the codec code, it should work?

Thanks
Nick

Hi Nick,

The NDK here is used to build the libopus/libogg encoding. It's fine to remove it, but I want to mention that the cost of cloud service access would increase. We posted a patch for this earlier; please refer to #20 (comment).

Alex
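
For anyone weighing the same trade-off, here is a minimal sketch of what the encoding switch looks like against the generic Cloud Speech v1 Java client. This is not this repo's own session code, which wraps the API differently; the class name, the 16 kHz sample rate, and the language code are illustrative assumptions. Declaring OGG_OPUS means the audio must be compressed by the NDK-built encoder first, while LINEAR16 lets you stream raw PCM with no native code at all, at the cost of larger uploads.

    import com.google.cloud.speech.v1.RecognitionConfig;
    import com.google.cloud.speech.v1.RecognitionConfig.AudioEncoding;
    import com.google.cloud.speech.v1.StreamingRecognitionConfig;

    /** Illustrative only: the two encodings a streaming request can declare. */
    public final class EncodingSketch {

      /** With the NDK-built libopus/libogg encoder: compressed audio, small uploads. */
      static StreamingRecognitionConfig opusStreamingConfig() {
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.OGG_OPUS) // requires the native encoder
                .setSampleRateHertz(16000)           // assumed mic sample rate
                .setLanguageCode("en-US")
                .build();
        return StreamingRecognitionConfig.newBuilder()
            .setConfig(config)
            .setInterimResults(true)
            .build();
      }

      /** Without the NDK: stream raw 16-bit PCM directly; no native code, larger uploads. */
      static StreamingRecognitionConfig rawPcmStreamingConfig() {
        RecognitionConfig config =
            RecognitionConfig.newBuilder()
                .setEncoding(AudioEncoding.LINEAR16) // raw PCM straight from the recorder
                .setSampleRateHertz(16000)
                .setLanguageCode("en-US")
                .build();
        return StreamingRecognitionConfig.newBuilder()
            .setConfig(config)
            .setInterimResults(true)
            .build();
      }
    }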

commented

Sorry, but how do I use that patch file?

Thanks
Nick

Sorry for the late reply.

You can try the command git apply [PATCH_FILE], or refer to the code in the patch file and make the changes yourself.
https://git-scm.com/docs/git-apply

commented

Hi Alex,

Are you sure that using FLAC audio (libopus/libogg encoding) decreases the expense of the Cloud Speech API? I didn't find any mention of this in the GCP documentation.

commented

btw, works like magic!

Hi Alex,

Are you sure that using FLAC audio (libopus/libogg encoding) decreases the expense of the Cloud Speech API? I didn't find any mention of this in the GCP documentation.

Sorry, wrong choice of words: it should be network bandwidth, not GCP expense.
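
For a rough sense of the difference (back-of-the-envelope figures, not from this repo): raw 16-bit mono PCM at 16 kHz is 16,000 samples/s × 16 bits = 256 kbit/s, while Opus typically encodes speech well at around 16-32 kbit/s, so streaming unencoded audio can multiply upload traffic by roughly 8-16×.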