xenova / transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

Home Page: https://huggingface.co/docs/transformers.js

Streaming support?

gduteaud opened this issue · comments

Feature request

Add support for streaming generated outputs. This appears to be supported in the transformers library: https://huggingface.co/docs/transformers/v4.38.2/en/generation_strategies#streaming

Motivation

Because generation takes time, it is desirable from a user-experience standpoint to display outputs "live" as they are being generated, rather than waiting until generation completes before showing anything.

Your contribution

I really wish I could but unfortunately this is well beyond my ability to implement.

You can use the callback_function generation parameter. For example:

```js
return await pipeline(data.text, {
    ...data.generation,

    // Called during generation with the current beams; decode the best
    // beam so far and forward the partial text to the page.
    callback_function: function (beams) {
        const decodedText = pipeline.tokenizer.decode(beams[0].output_token_ids, {
            skip_special_tokens: true,
        });

        self.postMessage({
            type: 'update',
            target: data.elementIdToUpdate,
            data: decodedText,
        });
    },
});
```
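For completeness, the main-thread side might look something like this (a minimal sketch, not code from the library: the `{ type, target, data }` message shape follows the snippet above, while `makeUpdateHandler` and the store abstraction are hypothetical):

```javascript
// Hypothetical receiver for the worker snippet above. The worker posts
// { type: 'update', target, data } as tokens stream in; each update is
// routed to a store keyed by target. In a real page the store would be
// DOM elements, e.g. document.getElementById(target).textContent = data.
function makeUpdateHandler(store) {
    return (event) => {
        const { type, target, data } = event.data;
        if (type === 'update') {
            store[target] = data;
        }
    };
}

// Wiring in the browser might look like:
//   const worker = new Worker('worker.js');
//   worker.addEventListener('message', makeUpdateHandler(elements));
```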

🤯 oh wow that's amazing, thank you!!

In TypeScript, the options type produces an error if you include callback_function in the options object.
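A common workaround until the option is declared in the library's types is to widen the options type yourself. This is a sketch under assumptions: `GenerationOptions` here is a hypothetical stand-in, not the actual transformers.js type, and the `beams` parameter is typed loosely:

```typescript
// Hypothetical stand-in for the library's generation options type,
// which (per the comment above) does not declare callback_function.
interface GenerationOptions {
    max_new_tokens?: number;
}

// Widen via an intersection type so the extra property type-checks.
type StreamingOptions = GenerationOptions & {
    callback_function?: (beams: unknown[]) => void;
};

const options: StreamingOptions = {
    max_new_tokens: 16,
    callback_function: (beams) => {
        // decode and display partial output here
    },
};

// Alternatively, cast at the call site:
//   pipeline(text, options as GenerationOptions)
```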