openai / consistencydecoder

Consistency Distilled Diff VAE

Model Device Allocation Issue Affecting Parallel Computation

Vanint opened this issue · comments

Hello, I appreciate the work on the Consistency Decoder. I've run into an issue with the model shipped in this repository: its serialized code is hard-coded to use torch.device("cuda:0"), which is problematic for parallel computation:

input = torch.to(features, torch.device("cuda:0"), 6)
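If I'm reading the serialized TorchScript correctly, this line corresponds to the eager-mode call sketched below (6 is TorchScript's enum value for torch.float32); the hard-coded device argument is what causes the problem:

input = features.to(torch.device("cuda:0"), dtype=torch.float32)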

This prevents the model from running on multiple GPUs. Could you suggest a way to modify the model to dynamically select the device, allowing for parallel GPU processing?
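For reference, the only workaround I've found is to pin each worker process to a single physical GPU before CUDA initializes, so that the baked-in cuda:0 resolves to a different device in each process. A minimal sketch, assuming a torchrun-style launcher and using a placeholder path for the released archive:

import os

# Pin this process to one physical GPU *before* torch initializes CUDA,
# so that the model's hard-coded "cuda:0" resolves to that GPU.
local_rank = os.environ.get("LOCAL_RANK", "0")  # set by torchrun; "0" as a fallback
os.environ["CUDA_VISIBLE_DEVICES"] = local_rank

import torch

# "consistency_decoder.pt" is a placeholder for the released JIT archive.
decoder = torch.jit.load("consistency_decoder.pt", map_location="cuda:0")

This works, but it is awkward when a framework manages device placement itself, which is why a model-side fix for dynamic device selection would be much cleaner.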

Thank you for your assistance.