I have an AMD graphics card, hope someone could help me.
Reshex opened this issue · comments
Whenever I run gui.bat I get this error:
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
Because I am running an AMD GPU, the "cuda" device option does not work for me, but sadly I don't know what to change it to.
Moreover, when I go to my local URL and try to generate an image, a different error message pops up in the terminal:
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
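For what it's worth, the error message is asking for a device/dtype fallback along these lines. This is only a sketch (the variable names are illustrative, not from the repo); on ROCm builds of PyTorch, supported AMD GPUs show up under the "cuda" device name, which is why the same check works for them:

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs are exposed under the "cuda"
# device name, so this check also covers supported AMD cards.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Many float16 kernels (e.g. LayerNorm) are not implemented on CPU,
# which is what the "not implemented for 'Half'" error means, so
# fall back to float32 when no accelerator is available.
dtype = torch.float16 if device == "cuda" else torch.float32

print(device, dtype)
```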
hope somebody could help me please <3
Hope too..
Does it mean that there is no option for AMD GPU to run this?
The AMD 7900 XT/XTX run code from this repo without any source modifications, and they are detected as cuda-compatible in PyTorch.
I have a 6900 XT. Does that mean I have to make modifications? And if so, what are they in order for it to work?
Does PyTorch detect your card as cuda? If yes, then it should be compatible.
Something like:
>>> import torch
>>> print(torch.cuda.is_available())
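A slightly fuller check, if it helps, is to also look at which backend your PyTorch build was compiled against. This is a general PyTorch sketch, not something specific to this repo:

```python
import torch

# True for both CUDA and ROCm builds when a supported GPU is visible.
print(torch.cuda.is_available())

# On a GPU build, exactly one of these is a version string:
# torch.version.hip on ROCm (AMD), torch.version.cuda on CUDA (NVIDIA).
# If both print None, you have a CPU-only build of PyTorch installed.
print(torch.version.hip)
print(torch.version.cuda)
```

If both version attributes come back None, the fix is to install a GPU build of PyTorch rather than to modify this repo's code.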
I don't think PyTorch detects my card as cuda. That is the main problem.