fboulnois / stable-diffusion-docker

Run the official Stable Diffusion releases in a Docker container with txt2img, img2img, depth2img, pix2pix, upscale4x, and inpaint.

CUDA out of memory

lsaudon opened this issue · comments

Hello, I have another error.

./build.sh run 'a'

load pipeline start: 2022-10-03T17:08:45.114285
Fetching 19 files: 100%|██████████| 19/19 [00:00<00:00, 49528.76it/s]
ftfy or spacy is not installed using BERT BasicTokenizer instead of ftfy.
{'trained_betas'} was not found in config. Values will be initialized to default values.
loaded models after: 2022-10-03T17:08:59.515898
  0%|          | 0/51 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/docker-entrypoint.py", line 174, in <module>
    main()
  File "/usr/local/bin/docker-entrypoint.py", line 157, in main
    stable_diffusion(
  File "/usr/local/bin/docker-entrypoint.py", line 54, in stable_diffusion
    images = pipe(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py", line 249, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_2d_condition.py", line 234, in forward
    sample, res_samples = downsample_block(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/unet_blocks.py", line 537, in forward
    hidden_states = attn(hidden_states, context=encoder_hidden_states)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 148, in forward
    x = block(x, context=context)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 197, in forward
    x = self.attn1(self.norm1(x)) + x
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 265, in forward
    hidden_states = self._attention(q, k, v, sequence_length, dim)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/attention.py", line 281, in _attention
    attn_slice = attn_slice.softmax(dim=-1)
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 8.00 GiB total capacity; 5.88 GiB already allocated; 0 bytes free; 6.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

You have 8 GB of VRAM, so you need to follow the instructions in the examples for GPUs with less memory.
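
For context, loading the weights in half precision and enabling attention slicing are the usual ways to fit Stable Diffusion into 8 GB. Below is a minimal sketch using the diffusers API directly; the model id, token handling, and output filename are illustrative assumptions, and the container's ./build.sh run command exposes its own flags for this, so check the README examples for the exact options.

import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline in float16 to roughly halve the VRAM needed for the weights.
# The model id is an assumption here; depending on its license gating you may
# also need to log in with huggingface-cli or pass use_auth_token=True.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Compute attention in slices instead of one large batch; this trades a little
# speed for a much smaller peak memory footprint.
pipe.enable_attention_slicing()

# Optionally, export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 before
# running to reduce fragmentation, as suggested by the error message.

image = pipe("a").images[0]
image.save("out.png")

If it still runs out of memory, reducing the image height and width also helps, since attention memory grows quickly with resolution.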