Requires CUDA
lkraider opened this issue · comments
I am trying to run it on a MacBook Air M1.
After installing and trying to run, I get the error:
| /opt/homebrew/anaconda3/envs/audiogpt/lib/python3.8/site-packages/torch/cuda/__init__.py:211 in │
│ _lazy_init │
│ │
│ 208 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 209 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 210 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 211 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 212 │ │ if _cudart is None: │
│ 213 │ │ │ raise AssertionError( │
│ 214 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
AudioGPT uses large-scale models, so it's probably a good idea to use a GPU with CUDA cores to speed up the process.
If you're still having trouble and you don't mind long wait times, you can change the torch.device to 'cpu' instead of 'cuda:0' or 'cuda:1'. I wouldn't recommend it, though; run the tool on a GPU instance if all else fails.
Replacing device="cuda:0" with device="mps" seems to work for initializing torch on the Mac M1.
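To avoid hard-coding the device, a fallback chain can cover all three cases at once. This is a minimal sketch (not from the AudioGPT codebase) that picks CUDA when available, then MPS on Apple Silicon, and finally CPU; the hasattr guard is there because older torch builds lack the MPS backend entirely:

```python
import torch

# Pick the best available backend: CUDA on NVIDIA GPUs, MPS on Apple
# Silicon (M1/M2), otherwise fall back to CPU.
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Example: move a tensor (or a model) to the selected device.
x = torch.randn(2, 3).to(device)
print(f"running on {device.type}")
```

Note that even with MPS working, some operators used by large models are not yet implemented on that backend, so a CPU fallback (or the PYTORCH_ENABLE_MPS_FALLBACK=1 environment variable) may still be needed.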