xenova / whisper-web

ML-powered speech recognition directly in your browser

Home Page: https://hf.co/spaces/Xenova/whisper-web

Streaming support

matbee-eth opened this issue · comments

commented

Have you thought about / planned a way to support streaming audio instead of sending the entire audio clip? If it's not currently supported, how would you solve it? I'd appreciate some guidance on sending a proper PR to add streaming support, if possible.

At the moment (with the WASM backend), the encoder latency is just too high for real-time streaming. Fortunately, the onnxruntime-web team have been busy improving their WebGPU backend, and it's now at a stage where we can start testing with it.

So, we hope to add support for it soon! If you're up for the challenge, you can fork transformers.js, build onnxruntime-web from source w/ webgpu support, and replace the import with the custom onnxruntime-web build.
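For the "replace the import" step, one common approach (a sketch, not whisper-web's documented workflow) is to point the `onnxruntime-web` dependency in your transformers.js fork at the local custom build using an npm `file:` dependency. The `../onnxruntime/js/web` path is an assumption about where your local onnxruntime checkout lives:

```json
{
  "dependencies": {
    "onnxruntime-web": "file:../onnxruntime/js/web"
  }
}
```

After editing `package.json`, re-run `npm install` so the local build is linked in place of the published package.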

commented

By real-time I actually just mean streaming an audio source (mic in, generic audio out, file data, etc.) into Whisper, making it as close to real-time as the tech allows. So basically: split the audio into chunks of ~5-30 seconds each, and simply queue the chunks up for transcription.
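The chunk-and-queue idea above can be sketched as follows. This is illustrative only (the names `chunkAudio` and `TranscribeQueue` are not part of whisper-web); it assumes mono PCM as a `Float32Array` at 16 kHz, the sample rate Whisper expects:

```typescript
const SAMPLE_RATE = 16_000; // Whisper models expect 16 kHz mono audio

// Split a PCM buffer into fixed-length chunks (default 30 s, Whisper's window).
function chunkAudio(
  audio: Float32Array,
  chunkSeconds = 30,
  sampleRate = SAMPLE_RATE,
): Float32Array[] {
  const chunkSize = chunkSeconds * sampleRate;
  const chunks: Float32Array[] = [];
  for (let start = 0; start < audio.length; start += chunkSize) {
    // subarray returns a view over the same buffer, so no copy is made
    chunks.push(audio.subarray(start, Math.min(start + chunkSize, audio.length)));
  }
  return chunks;
}

// Minimal FIFO queue: chunks are transcribed strictly one at a time, so the
// model is never asked to process two chunks concurrently.
class TranscribeQueue {
  private pending: Promise<void> = Promise.resolve();

  constructor(
    private transcribe: (chunk: Float32Array) => Promise<string>,
    private onText: (text: string) => void,
  ) {}

  enqueue(chunk: Float32Array): void {
    this.pending = this.pending
      .then(() => this.transcribe(chunk))
      .then((text) => this.onText(text));
  }
}
```

Feeding live mic input would work the same way: accumulate samples from the audio graph until a chunk boundary is reached, then `enqueue` the chunk.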

I'll look into WebGPU; I was already planning to check whether this project would work with it, so I'll take a look.

Well yes, you can just send 30-second chunks of audio to Whisper, but as stated above, you won't get a response for at least 1-2 seconds because of the encoder latency. On top of that, depending on how much you're decoding, you'd have to wait for the full chunk to be decoded before merging it with the current predicted text.
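The merging step mentioned above can be sketched like this. It is a hypothetical helper, not whisper-web's actual merging logic: when consecutive chunks overlap in time, the tail of the previous transcript repeats at the head of the new one, so we drop the longest string overlap before concatenating:

```typescript
// Merge a newly decoded chunk transcript into the running transcript by
// removing the longest suffix of `previous` that is also a prefix of `next`.
function mergeTranscripts(previous: string, next: string): string {
  const max = Math.min(previous.length, next.length);
  // Try the longest possible overlap first, shrinking until one matches.
  for (let len = max; len > 0; len--) {
    if (previous.endsWith(next.slice(0, len))) {
      return previous + next.slice(len);
    }
  }
  // No overlap found: just join with a space (unless either side is empty).
  return previous + (previous && next ? " " : "") + next;
}
```

A pure string overlap is fragile when the two decodings disagree on the overlapping words; matching on Whisper's token timestamps instead would be more robust, at the cost of extra bookkeeping.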

That said, I do think this will be feasible with WebGPU, at which point I'll probably take a look at this (unless you'd be interested in starting now, working with the WASM backend).