SHI-Labs / Prompt-Free-Diffusion

Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024

Home Page: https://arxiv.org/abs/2305.16223


Multi GPU

pribadihcr opened this issue · comments

Hi, how do I use multiple GPUs for inference? Thanks.

Just set cache_examples = False; then you can run this on a single GPU.
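
For reference, a minimal sketch of where that flag typically lives, assuming the demo follows the usual Gradio `gr.Examples` pattern in `app.py`; the function and file names below are illustrative, not the repo's exact code:

```python
# Sketch of a Gradio demo's example block, assuming a standard gr.Examples setup.
# Names and paths are illustrative placeholders, not the repo's actual code.
import gradio as gr

def run_inference(reference_image):
    # Placeholder for the actual Prompt-Free Diffusion inference call.
    return reference_image

with gr.Blocks() as demo:
    inp = gr.Image(label="Reference image")
    out = gr.Image(label="Generated image")
    btn = gr.Button("Generate")
    btn.click(run_inference, inputs=inp, outputs=out)

    gr.Examples(
        examples=[["assets/examples/example.png"]],  # illustrative example path
        inputs=inp,
        outputs=out,
        fn=run_inference,
        cache_examples=False,  # skip pre-running the model on each example at launch
    )

demo.launch()
```

When `cache_examples=True`, Gradio runs the inference function on every listed example at startup to cache the outputs, which adds extra GPU load; setting it to `False` avoids that, so the demo fits on a single GPU.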