V100 cannot calculate bf16
ganganngannn opened this issue
ganganngannn commented
Hello, how do I modify the configuration if I want to run this on a 32 GB V100?
Jari Hiltunen commented
Do you mean run.py line 39 error? I checked how automatic1111 .devices.py is structured and I tried:
device = "cuda"
dtype = torch.float16
.
.
.
with torch.cuda.amp.autocast(dtype=torch.float16):
and with 32, but results are bad.
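
A minimal sketch of that kind of fallback, assuming plain PyTorch and a placeholder model (the `torch.nn.Linear` stand-in and tensor shapes are illustrative, not from this repo): `torch.cuda.is_bf16_supported()` selects bf16 only where the hardware supports it (Ampere, sm_80, or newer), so a V100 (Volta, sm_70) ends up on fp16.

```python
import torch

# bf16 requires Ampere (sm_80) or newer; a V100 (Volta, sm_70) does not
# support it, so fall back to fp16 there.
dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16

device = "cuda"
model = torch.nn.Linear(16, 16).to(device)  # placeholder for the real model
x = torch.randn(4, 16, device=device)

# autocast runs eligible ops in the chosen dtype during the forward pass
with torch.autocast(device_type="cuda", dtype=dtype):
    y = model(x)

print(y.dtype)  # torch.float16 on a V100, torch.bfloat16 on Ampere+
```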
Furkan Gözükara commented
We added fully working fp16 support: #125