export LD_LIBRARY_PATH when nvidia-cudnn-cu11 is installed via pip in the venv
ruzko opened this issue
The GPU models need CUDA, specifically CUDA 11, for the time being.
For some reason, even though the libraries are installed as dependencies in the virtualenv, both whisper_online.py and whisper_online_server.py require the path to the nvidia-cudnn-cu11 `.so` libraries to be exported (only when using faster-whisper, to my knowledge).
If others experience the error:

```
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Please make sure libcudnn_ops_infer.so.8 is in your library path!
[1] 16931 IOT instruction (core dumped) python3.10 whisper_streaming/whisper_online_server.py --min-chunk-size 3
```
it can be solved by running this before starting the Python program:

```
export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
```
hi,
I think I had the same issue, but I went through it without noticing. This is covered by the "Follow their instructions" step for the faster-whisper install in the README. But I will note this and maybe put it in the FAQ/recommendations. Thanks.