kuprel / min-dalle

min(DALL·E) is a fast, minimal port of DALL·E Mini to PyTorch


CUDA out of memory

AndreyRGW opened this issue

RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 6.00 GiB total capacity; 5.32 GiB already allocated; 0 bytes free; 5.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Is it possible to run this neural network on a card with six gigs of VRAM, somehow?
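
The error text above points at PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of setting max_split_size_mb before PyTorch touches the GPU (the value 128 MiB is only an illustrative guess, not a tuned recommendation):

import os
# Must be set before the first CUDA allocation in the process;
# importing torch afterwards is the safe ordering.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
import torch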

Not sure if it will run on 6GB, but you could try setting is_reusable=False
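
In the Python API that suggestion would look roughly like the sketch below; apart from is_reusable itself, the constructor arguments and the prompt are assumptions based on the README and may differ between versions. With is_reusable=False the sub-models are released after each step instead of staying resident, trading speed for lower VRAM.

from min_dalle import MinDalle
# is_reusable=False: free each sub-model after it runs to save VRAM
model = MinDalle(is_mega=False, is_reusable=False)
image = model.generate_image(text="avocado armchair, product photo", grid_size=1)
image.save("generated.png")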

Nah, that didn't work. Okay, thanks for trying to help.

Is it possible to run this neural network on a card with six gigs of VRAM, somehow?

Yes, the mini version of this model needs only about 2.4 GB of VRAM per single image generation (with is_reusable=False), so you should be able to run it with --no-mega --grid-size=1, like this:
python image_from_text.py --no-mega --grid-size=1 --text="Succubus from world of warcraft, fantasy art."
You can even use --grid-size=3 and it still uses only about 4.1 GB of VRAM.
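
If you want to verify the peak VRAM usage on your own card, a small sketch using PyTorch's built-in memory statistics, run after a generation finishes (the 2.4/4.1 GB figures above are the commenter's, not measured here):

import torch
# report the peak memory ever allocated on GPU 0 during this process
peak_gib = torch.cuda.max_memory_allocated(0) / 1024 ** 3
print(f"peak VRAM allocated: {peak_gib:.2f} GiB")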