mozilla / DeepSpeech

DeepSpeech is an open-source embedded (offline, on-device) speech-to-text engine that can run in real time on devices ranging from a Raspberry Pi 4 to high-power GPU servers.
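
For context, here is a minimal sketch of offline inference with the `deepspeech` Python package; the model and audio filenames are placeholders, and the audio is assumed to already be 16 kHz, 16-bit mono WAV (the format the released models expect).

```python
# Minimal sketch of offline inference with the `deepspeech` Python package.
# The model and audio filenames below are placeholders.
import wave

import numpy as np
from deepspeech import Model

model = Model("deepspeech-0.9.3-models.pbmm")                    # exported acoustic model
# model.enableExternalScorer("deepspeech-0.9.3-models.scorer")   # optional external scorer

with wave.open("audio.wav", "rb") as wav:                        # assumed 16 kHz, 16-bit, mono
    audio = np.frombuffer(wav.readframes(wav.getnframes()), dtype=np.int16)

print(model.stt(audio))  # prints the transcript for the whole clip
```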

Run transcribe.py on GPU

ros-packages opened this issue

Hello,
I want to transcribe a large number of files with DeepSpeech using TensorFlow, but apparently transcribe.py does not use the GPU and instead fills all of my CPU cores.
Is there a way to make transcribe.py use the GPU?
Is there any way to run a .pb file instead of checkpoints in transcribe.py?
Thanks.
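
One general sanity check worth doing first (generic TensorFlow, not anything transcribe.py documents): confirm that the installed TensorFlow build can see the GPU at all. A minimal sketch, assuming a single GPU at index 0:

```python
# Check whether the installed TensorFlow build can see a GPU at all.
# If the list below is empty, any DeepSpeech script will fall back to CPU,
# typically because a CPU-only TensorFlow package or a mismatched CUDA is installed.
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")  # assumption: a single GPU, index 0

from tensorflow.python.client import device_lib

gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"]
print("Visible GPUs:", gpus)
```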

Take a look at issue #3693
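
On the second question: transcribe.py restores from checkpoints, but a frozen .pb export can be loaded directly through the TF1-style API. Below is a minimal, generic frozen-graph loader for inspection, not the code path transcribe.py actually takes; `output_graph.pb` is a placeholder path, and the memory-mapped .pbmm format cannot be loaded this way. The exact tensor names inside the exported graph are not shown here, so check the operation names before wiring up any feeds.

```python
# Generic TF1-style loader for a frozen graph exported as a plain .pb file.
# This is a sketch for inspecting the graph, not the checkpoint-based path transcribe.py uses.
import tensorflow.compat.v1 as tf


def load_frozen_graph(pb_path):
    """Read a serialized GraphDef and import it into a fresh tf.Graph."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph


graph = load_frozen_graph("output_graph.pb")  # placeholder path to an exported model
for op in graph.get_operations()[:20]:        # list the first few op names to find inputs/outputs
    print(op.name)
```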