voicepaw / so-vits-svc-fork

so-vits-svc fork with realtime support, improved interface and more features.

Support AMD GPUs on Windows

34j opened this issue

commented

Is your feature request related to a problem? Please describe.
AMD GPUs not supported on Windows

Describe the solution you'd like
Add support for AMD GPUs on Windows

Additional context

I'm trying to get this version working. I've installed the CPU build of torch because there is no ROCm build for Windows.
After installing with pip install -U git+https://github.com/34j/so-vits-svc-fork.git@feat/openml, this error occurs when running svc train:

Traceback (most recent call last):
  File "C:\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\Scripts\svc.exe\__main__.py", line 7, in <module>
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\click\core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\__main__.py", line 130, in train
    train(config_path=config_path, model_path=model_path)
  File "D:\Files\Code\python\so-vits-svc-fork\venv\lib\site-packages\so_vits_svc_fork\train.py", line 41, in train
    raise RuntimeError("CUDA is not available.")
RuntimeError: CUDA is not available.

After commenting out these two lines in train.py:

    #if not torch.cuda.is_available():
        #raise RuntimeError("CUDA is not available.")

This is the output of the command. Training does not start at all:

(venv) PS D:\Files\Code\python\so-vits-svc-fork> svc train
[17:19:29] INFO     [17:19:29] Version: 1.3.3                                                                                                                                                   __main__.py:49
Downloading D_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 178M/178M [00:04<00:00, 41.2MiB/s]
Downloading G_0.pth: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 172M/172M [00:03<00:00, 47.8MiB/s]
(venv) PS D:\Files\Code\python\so-vits-svc-fork> 
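Rather than deleting the check outright, the guard could be replaced with a fallback device selection; a minimal sketch (hypothetical `pick_device` helper, not part of the project's code) shows why that alone still would not engage an AMD GPU on Windows:

```python
def pick_device() -> str:
    """Best-effort device selection: CUDA/ROCm if present, else CPU.

    Note: ROCm builds of PyTorch (Linux-only) report themselves through
    the "cuda" backend, so this check covers them too. A CPU-only torch
    wheel -- the only official option for AMD on Windows -- always falls
    through to "cpu", which is why simply commenting out the hard
    RuntimeError makes the error disappear without enabling the GPU.
    """
    try:
        import torch
        if torch.cuda.is_available():  # True for both CUDA and ROCm builds
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```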
commented

Could you remove that part, run pip install torch-openml, and try again?

Nothing was found under that name; I tried pip install openml-pytorch instead, but nothing changed.

commented

Understood. I failed.

Sure, I did that. No CUDA error is printed anymore, but the command does nothing; it exits straight away.

commented

Did svc pre-hubert work correctly?

Yes, pre-hubert worked, but no GPU was used.

But in this branch realtime inference works on GPU

edit.
Not really

commented

@allcontributors add pierluigizagaria userTesting

commented

It seems difficult to support, so I give up.

Can this project work with 3.11? I'm trying to install torch-mlir, which should make torch compatible with my AMD GPU on Windows.

I've already tried using torch-directml but got the error mentioned in microsoft/DirectML#400.
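For reference, the torch-directml route boils down to a probe like the following (a hedged sketch: it assumes the `torch-directml` package, which only ships Windows wheels, and degrades to `None` everywhere else):

```python
def try_directml():
    """Return a DirectML torch device, or None if the package is absent.

    torch_directml.device() returns a device object that tensors and
    modules can be moved to with .to(...). The package is Windows-only,
    and in this thread it was hitting microsoft/DirectML#400.
    """
    try:
        import torch_directml
        return torch_directml.device()
    except ImportError:
        return None
```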

commented
> pipdeptree --reverse --packages llvmlite
Warning!!! Possibly conflicting dependencies found:
* poetry==1.4.2
 - platformdirs [required: >=2.5.2,<3.0.0, installed: 3.2.0]
------------------------------------------------------------------------
Warning!! Cyclic dependencies found:
* poetry-plugin-export => poetry => poetry-plugin-export
* poetry => poetry-plugin-export => poetry
------------------------------------------------------------------------
llvmlite==0.39.1
  - numba==0.56.4 [requires: llvmlite>=0.39.0dev0,<0.40]
    - librosa==0.9.1 [requires: numba>=0.45.1]
      - so-vits-svc-fork==3.0.4 [requires: librosa]
      - torchcrepe==0.0.18 [requires: librosa==0.9.1]
        - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
    - resampy==0.4.2 [requires: numba>=0.53]
      - librosa==0.9.1 [requires: resampy>=0.2.2]
        - so-vits-svc-fork==3.0.4 [requires: librosa]
        - torchcrepe==0.0.18 [requires: librosa==0.9.1]
          - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]
      - scikit-maad==1.3.12 [requires: resampy>=0.2]
        - so-vits-svc-fork==3.0.4 [requires: scikit-maad]
      - torchcrepe==0.0.18 [requires: resampy]
        - so-vits-svc-fork==3.0.4 [requires: torchcrepe>=0.0.17]

3.11 is not supported for the above reasons, but won't it work with 3.10?

I got an error while trying to install on 3.11

commented

Sorry, my typo, I was trying to ask if torch-mlir would work with 3.10.

They don't provide compiled Windows wheels for 3.10.

commented

Since both inference and training rely on librosa as of now, 3.11 support is not possible.

commented

Installing the rc version of numba may allow it to be used with Python 3.11, but it may cause other problems (I haven't tried it)
(numba/numba#8841)

Would it be possible to run this using DirectML? (Although I've only gotten DirectML to work on Python 3.10.6; I haven't tried newer versions.)
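The interpreter constraint behind these suggestions can be sketched as a pre-flight check (hypothetical helper; the numba 0.57 release candidate is the assumed fix from numba/numba#8841, untested here):

```python
import sys

def numba_hint() -> str:
    """Report whether the pinned numba 0.56.x can install on this Python.

    llvmlite 0.39.x / numba 0.56.x (pulled in via librosa, resampy, and
    torchcrepe, per the pipdeptree output above) top out at Python 3.10.
    On 3.11 the thread's suggestion is the 0.57 release candidate,
    installed with the --pre flag, with the caveat that it is untested.
    """
    if sys.version_info >= (3, 11):
        return "numba 0.56.x will not install here; try: pip install --pre numba"
    return "numba 0.56.x is installable on this interpreter"
```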

commented

microsoft/DirectML#400

Any update on this?