Stability-AI / generative-models

Generative Models by Stability AI

img2vid: can we use multiple GPUs to speed up inference?

khayamgondal opened this issue · comments

Inference takes about 30 minutes for img2vid. I'm wondering whether there is a way to leverage multiple GPUs to improve speed. I have 8x 100 GPUs.

I'm currently running it with the diffusers pipeline:

from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# placeholder path for the conditioning image the video is generated from
image = load_image("local/path/input.png")

pipeline = DiffusionPipeline.from_pretrained("local/path/stable-video-diffusion-img2vid-xt-1-1")
frames = pipeline(image).frames[0]
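For reference, if the 30 minutes comes from running in full precision or without explicitly moving the pipeline onto a GPU, half precision plus an explicit .to("cuda") is usually the first thing to try before reaching for multiple GPUs. A minimal single-GPU sketch, assuming a recent diffusers release; the input path, decode_chunk_size, and fps values are illustrative, not taken from this issue:

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the SVD pipeline in fp16 and place it on one GPU.
pipeline = DiffusionPipeline.from_pretrained(
    "local/path/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
)
pipeline.to("cuda")

image = load_image("local/path/input.png")  # placeholder conditioning image

# decode_chunk_size controls how many frames the VAE decodes at once:
# smaller values use less VRAM, larger values decode faster.
frames = pipeline(image, decode_chunk_size=8).frames[0]
export_to_video(frames, "output.mp4", fps=7)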

same question

same question

Even if you run with CUDA_VISIBLE_DEVICES=3,4 python xxx, it still just uses GPU 3.
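That is expected: CUDA_VISIBLE_DEVICES only restricts which GPUs the process can see, while the pipeline still places every component on a single device, so only the first visible GPU does any work. The pipeline does not split one denoising run across GPUs out of the box, but if there are several images to animate, the work can be spread data-parallel across the 8 GPUs with accelerate. A sketch using accelerate's PartialState, assuming accelerate is installed, the script is started with accelerate launch --num_processes=8 script.py, and the image paths are placeholders:

import torch
from accelerate import PartialState
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# One process per GPU; each process holds its own copy of the pipeline.
state = PartialState()

pipeline = DiffusionPipeline.from_pretrained(
    "local/path/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
)
pipeline.to(state.device)

# Placeholder inputs: one conditioning image per video to generate.
image_paths = [f"local/path/frame_{i}.png" for i in range(8)]

# split_between_processes hands each rank its slice of the list, so the
# videos are generated concurrently across GPUs instead of sequentially.
with state.split_between_processes(image_paths) as my_paths:
    for path in my_paths:
        image = load_image(path)
        frames = pipeline(image, decode_chunk_size=8).frames[0]
        export_to_video(frames, f"{path}.mp4", fps=7)

Note this improves throughput (videos per hour), not the latency of a single video; for one video the practical levers remain fp16, decode_chunk_size, and num_inference_steps. Recent diffusers releases also document a device_map="balanced" option in from_pretrained that spreads pipeline components across GPUs, but that mainly reduces per-GPU memory rather than speeding up a single generation.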