asagi4 / comfyui-prompt-control

ComfyUI nodes for prompt editing and LoRA control

LoRAScheduler output can differ from LoraLoader if both are used in the same workflow

JoyfulTemplate opened this issue · comments

The LoRAScheduler output image is different from the output of a normal LoraLoader using the same checkpoint. (Sometimes it occurs on the first run, sometimes on later runs after changing the seed, prompt, etc.)
[Screenshot: different_result]
Interestingly, if the LoRAScheduler uses a separate checkpoint loader, the results become the same.

This looks like unpatching the model after sampling isn't happening properly, but I'm not sure where the actual problem is. Maybe ComfyUI doesn't always unpatch models after sampling.

ComfyUI's model object is actually a thin wrapper over the "real" model object that contains the loaded weights, and applying LoRAs modifies those weights directly. The patcher maintains a backup of modified weights that should get restored when sampling is done. It seems that's not happening. It might be that ComfyUI is being smart and not "needlessly" unpatching a model that doesn't change.
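To illustrate the idea, here's a minimal sketch of that backup/restore mechanism. `TinyPatcher` and its methods are made-up names for illustration, not ComfyUI's actual `ModelPatcher` API, and a plain dict of floats stands in for the model weights:

```python
import copy

class TinyPatcher:
    """Illustrative stand-in for a weight patcher; not ComfyUI's real API."""

    def __init__(self, model):
        self.model = model   # reference to the shared underlying weights
        self.backup = {}     # key -> original value, saved before patching

    def patch(self, key, delta):
        # Back up the original weight once, then modify it in place.
        if key not in self.backup:
            self.backup[key] = copy.deepcopy(self.model[key])
        self.model[key] += delta

    def unpatch(self):
        # Restore every modified weight from the backup.
        for key, original in self.backup.items():
            self.model[key] = original
        self.backup.clear()
```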

With a separate checkpoint loader, the ModelPatcher objects will not share the same underlying weights, so you don't get problems like this.
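A usage sketch of why the sharing matters, reusing the hypothetical `TinyPatcher` from above: two wrappers over one weight dict see each other's in-place patches, while a second load gets its own copy:

```python
weights = {"w": 1.0}        # one checkpoint load, shared by both wrappers
a = TinyPatcher(weights)    # e.g. the LoRAScheduler's patcher
b = TinyPatcher(weights)    # e.g. the LoraLoader's patcher

a.patch("w", 0.5)           # a applies a LoRA-style delta in place
print(b.model["w"])         # 1.5 -- b would now sample with a's patch baked in

a.unpatch()
print(b.model["w"])         # 1.0 -- correct again, but only if a restores

separate = {"w": 1.0}       # a second checkpoint load: fresh weights
c = TinyPatcher(separate)   # c can never be affected by a or b
```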

Maybe I'll figure out a fix at some point, but for now I'd just suggest not mixing LoRAScheduler or ScheduleToModel with LoraLoader.

@asagi4 Thanks for the explanation. I tried using multiple LoRASchedulers in the same workflow and the results look fine. So for now I'll only use LoRAScheduler when I need to schedule LoRAs. Thanks again for bringing this feature to ComfyUI :)