rhasspy / larynx

End-to-end text-to-speech system using gruut and onnx


CUDA does not appear to be working in the Docker container

LargoUsagi opened this issue

Running the latest Docker container with the NVIDIA container runtime, nvidia-smi runs successfully and shows the graphics card as available and ready.

[screenshot: nvidia-smi output showing the GPU available]
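For reference, this is roughly the invocation I mean (a sketch; the image name, runtime flag, and entrypoint override are my assumptions about this setup):

# Hypothetical invocation: run nvidia-smi inside the image with the NVIDIA runtime
docker run --rm --runtime=nvidia --entrypoint nvidia-smi rhasspy/larynx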

You can run larynx from the command line inside the container without error.
[screenshot: larynx running from the command line inside the container]

But as soon as you pass the --cuda flag in:

^C(.venv) root@larynx-dd4858485-t9dj2:/home/larynx/app/larynx# python -m larynx --cuda
Traceback (most recent call last):
  File "/usr/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/larynx/app/larynx/__main__.py", line 750, in <module>
    main()
  File "/home/larynx/app/larynx/__main__.py", line 66, in main
    import torch
ModuleNotFoundError: No module named 'torch'

Similar errors occur if you attempt to start the container with the --cuda flag as an additional argument.
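For clarity, that is an invocation along these lines (hypothetical; the image name and flag placement are assumptions):

# Pass --cuda as an additional argument to the container's entrypoint
docker run --rm --runtime=nvidia rhasspy/larynx --cuda

This fails with the same ModuleNotFoundError for torch.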

By exec-ing into the container and using the venv that already exists there, I was able to install torch and then run the command.

[screenshot: installing torch into the container's venv and running the command successfully]
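The workaround looked roughly like this (a sketch; the pod name comes from the shell prompt above, and the venv path is an assumption about the image layout):

# Exec into the pod (kubectl here, since the prompt suggests Kubernetes;
# with plain Docker it would be `docker exec -it <container> bash`)
kubectl exec -it larynx-dd4858485-t9dj2 -- bash

# Activate the existing venv and install torch into it
source /home/larynx/app/.venv/bin/activate  # hypothetical venv path
pip install torch

# The --cuda flag now gets past the import error
python -m larynx --cuda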

I believe the build container has an issue here: https://github.com/rhasspy/larynx/blob/master/Dockerfile#L42. My knowledge of Python is limited, but the intent appears to be to install a precompiled version of torch that you provide, and that package does not appear to actually make it into the container.
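For illustration, the pattern I would expect around that line is something like the following (a hypothetical sketch, not the actual Dockerfile; the wheel location and venv path are assumptions):

# Copy a prebuilt torch wheel into the image...
COPY download/torch-*.whl /tmp/
# ...and install it into the app's venv so `python -m larynx --cuda` can import it
RUN /home/larynx/app/.venv/bin/pip install /tmp/torch-*.whl && rm /tmp/torch-*.whl

If that install step is skipped, or installs into a different environment than the venv the entrypoint uses, torch would be missing at runtime, which would match the traceback above.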