Demo video: demo.mp4
This is a simple Streamlit UI for OpenAI's Whisper speech-to-text model. It lets you automatically fetch media from a YouTube URL or select local files, & then runs Whisper on them. Following that, it displays some basic analytics on the transcription. Feel free to send a PR if you want to add any more analytics or features!
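For reference, the core transcription step that this kind of UI presumably wraps is the openai-whisper Python API; here's a minimal sketch (the model size & file name are placeholders, not necessarily what the app uses):

import whisper

# Load one of Whisper's pretrained checkpoints ("tiny", "base", "small", "medium", "large").
model = whisper.load_model("base")

# Transcribe a local media file; Whisper uses ffmpeg to decode the audio.
result = model.transcribe("my_video.mp4")

print(result["text"])           # the full transcript
print(len(result["segments"]))  # timestamped segments, handy for analytics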
This was built & tested on Python 3.9, but it should also work on Python 3.7+ (as with the original Whisper repo).
You'll need to install ffmpeg on your system. Then, install the requirements with pip:
pip install -r requirements.txt
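If you're not sure whether ffmpeg is already installed, a quick check from Python looks something like this (just a throwaway sketch, not part of this repo):

# Check that ffmpeg is reachable on PATH before launching the app (sketch only).
# If it's missing, install it with your OS package manager, e.g. "brew install ffmpeg"
# on macOS or "sudo apt install ffmpeg" on Ubuntu/Debian.
import shutil

if shutil.which("ffmpeg") is None:
    raise SystemExit("ffmpeg not found on PATH - install it before running the app")
print("ffmpeg found at:", shutil.which("ffmpeg"))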
Once you're set up, you can run the app with:
streamlit run 01_Transcribe.py
This will open a new tab in your browser with the app (by default at http://localhost:8501). You can then enter a YouTube URL or select a local file & click "Run Whisper" to run the model on the selected media.
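To give a sense of the basic analytics mentioned above, metrics like word count & words per minute fall straight out of Whisper's result dict; a rough sketch of that kind of computation (not the app's actual code, & the metrics it shows may differ):

import whisper

model = whisper.load_model("base")         # placeholder model size
result = model.transcribe("my_video.mp4")  # placeholder file

words = result["text"].split()
# The end timestamp of the last segment approximates the spoken duration in seconds.
duration_s = result["segments"][-1]["end"] if result["segments"] else 0.0

print("word count:", len(words))
if duration_s:
    print("words per minute:", round(len(words) / (duration_s / 60), 1))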
Whisper is licensed under MIT, while Streamlit is licensed under Apache 2.0. Everything else in this repo is licensed under MIT.