Vaibhavs10/insanely-fast-whisper
Stargazers: 6553 · Watchers: 58 · Issues: 167 · Forks: 483
Vaibhavs10/insanely-fast-whisper Issues
- Out Of memory & attempt to get argmin of an empty sequence · Updated 19 days ago · 2 comments
- Can I add - initial_prompt like whisper · Updated 22 days ago · 1 comment
- [Error] metadata-generation failed while trying to install flash-attention on google collab T4 · Updated 24 days ago
- Local file · Updated 25 days ago
- pipx install insanely-fast-whisper outdated version · Updated a month ago · 1 comment
- Timestamps are too tight when repetition_penalty is present · Updated a month ago · 1 comment
- Could we talk in discord mp about subtitles · Updated a month ago
- Cuda Index Out of Bound error on GPU · Updated a month ago
- Getting `Use model.to('Cuda')` when trying to use Flash Attention · Updated a month ago
- question: client sdk · Updated a month ago
- The speed is slower than fast-whisper, is something wrong in my config? · Updated a month ago · 1 comment
- MPS flash attention support · Updated a month ago
- Doesn't detect CUDA · Updated a month ago · 3 comments
- Speaker Diarization · Updated a month ago
- Using with replicate.com · Updated a month ago
- What is the trade off in accuracy from fp32 to fp16 · Updated 2 months ago
- Topic (original title in Russian: "Тема") · Closed 2 months ago
- How is the accuracy and memory usage as compared to Faster Whisper? · Updated 2 months ago · 2 comments
- question on how this works? · Updated 2 months ago · 2 comments
- Error for Multiple languages audio · Updated 2 months ago
- num-speakers not yet available for pipx installed package · Updated 2 months ago · 1 comment
- Random and inconsistent transcribe · Closed 3 months ago · 4 comments
- Is there a way to cap the max length of the transcription? · Updated 3 months ago
- Use as python lib and release format. · Updated 3 months ago · 1 comment
- [FEATURE REQUEST] support audio input from microphones · Updated 3 months ago
- Too high vRAM usage · Updated 3 months ago
- is_flash_attn_2_available() returns False · Updated 3 months ago
- api in replicate.com throws Prediction failed. invalid load key, '\x00'. error randomly · Updated 3 months ago
- I run code in colab and output comes with terms like ''{"speakers": [], "chunks": [{"timestamp": [0.0, 130.4]'' I would like to transcribe only the text - Is this possible ? How? · Updated 3 months ago · 1 comment
- How about some honest speed/quality tests for a change · Updated 3 months ago
- Can insanely-fast-whisper support real-time transcription with websockets? · Updated 3 months ago
- Missing 0.0.13 on pipx? · Closed 3 months ago · 1 comment
- Cuda Out of Memory · Updated 3 months ago · 6 comments
- torch_dtype only for torch.float16? · Updated 3 months ago
- FFMPEG is installed but it gives error showing its not. · Closed 4 months ago · 1 comment
- CLI Feature requests: 1) Output .srt files, 2) Sequentially process all audio files in directory · Updated 4 months ago · 1 comment
- pipx install fails on Mac OS Sonoma · Updated 4 months ago · 3 comments
- Can I change the language? (original title in Korean: "언어를 변경 할 수 있나요?") · Closed 4 months ago · 1 comment
- Make timestamp more accurate · Closed 4 months ago
- Word Timestamps · Closed 4 months ago
- Issue on macOS Ventura - ProductVersion: 13.6.3 - BuildVersion: 22G436? · Updated 4 months ago · 2 comments
- In the benchmark table, what does the 'batching [24]' refer to? · Updated 4 months ago · 1 comment
- Is there an api to get the transcription progress? · Updated 4 months ago
- Get !!!!! in output.json file · Updated 4 months ago
- How to make model use 2 GPU? · Updated 4 months ago
- Remove the model from VRAM · Updated 4 months ago
- About "condition_on_previous_text" · Closed 4 months ago · 2 comments
- Add base64 inline audio · Updated 4 months ago · 2 comments
- Missing option to provide a prompt argument · Closed 4 months ago · 1 comment
- Way to generate output scores from the pipeline? · Closed 4 months ago · 1 comment