replicate / cog-stable-diffusion

Diffusers Stable Diffusion as a Cog model

Home Page: https://replicate.com/stability-ai/stable-diffusion

CUDA out of memory - SD2.1

anotherjesse opened this issue · comments

When asking for 4 outputs - with everything else the default, I sometimes get:

Output:

```
CUDA out of memory. Tried to allocate 12.66 GiB (GPU 0; 39.59 GiB total capacity; 19.58 GiB already allocated; 5.69 GiB free; 32.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
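The error message itself suggests setting `max_split_size_mb` to reduce allocator fragmentation. A minimal sketch of how to try that, assuming the variable is set before torch makes its first CUDA allocation (the value 512 is an arbitrary starting point, not a tested recommendation):

```python
import os

# Must be set before torch initializes CUDA; 512 MB is an
# illustrative value to tune per workload, not a recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"
```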

Anecdotally, this occurred right after I triggered an NSFW exception via the API.

Perhaps the raised exception prevents torch from reclaiming GPU memory, or perhaps it's unrelated.
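If the exception hypothesis is right, one mitigation would be to clean up explicitly whenever inference raises. A sketch of such a wrapper (the function names here are illustrative, not from this repo's predictor): the traceback of an in-flight exception can keep intermediate tensors alive, so a GC pass followed by `torch.cuda.empty_cache()` gives the caching allocator a chance to release blocks back to the driver.

```python
import gc


def run_with_gpu_cleanup(fn, *args, **kwargs):
    # Hypothetical wrapper: run inference, and whether it returns or
    # raises (e.g. an NSFW exception), drop dangling references and
    # release PyTorch's cached CUDA blocks before the next request.
    try:
        return fn(*args, **kwargs)
    finally:
        gc.collect()  # collect tensors kept alive by the traceback
        try:
            import torch

            if torch.cuda.is_available():
                torch.cuda.empty_cache()
        except ImportError:
            pass  # torch not installed; nothing to release
```

Note that `empty_cache()` only returns *cached* unused blocks; memory still referenced by live tensors is unaffected, which is why the `gc.collect()` comes first.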

I've seen a lot of user reports of this going back to November, so maybe the issue is not unique to 2.1?