LoRAScheduler output can differ from LoraLoader if both are used in the same workflow
JoyfulTemplate opened this issue
This looks like the model isn't being unpatched properly after sampling, but I'm not sure where the actual problem is. Maybe ComfyUI doesn't always unpatch models after sampling.
ComfyUI's model object is actually a thin wrapper over the "real" model object that contains the loaded weights, and applying LoRAs modifies those weights directly. The patcher keeps a backup of the modified weights that should be restored once sampling is done. It seems that restore isn't happening here, possibly because ComfyUI is being smart and skipping a "needless" unpatch of a model it considers unchanged.
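For illustration, here's a minimal sketch of that patch/backup/restore pattern. `ToyPatcher` and its methods are hypothetical stand-ins written for this explanation, not ComfyUI's actual `ModelPatcher` API:

```python
import torch

class ToyPatcher:
    """Thin wrapper over a shared underlying model, mimicking the *role* of
    ComfyUI's ModelPatcher. All names here are hypothetical."""

    def __init__(self, model):
        self.model = model   # the "real" model object holding the weights
        self.backup = {}     # original weights, saved before patching

    def patch(self, deltas):
        # Apply LoRA-style weight deltas in place, backing up originals first.
        for name, param in self.model.named_parameters():
            if name in deltas:
                if name not in self.backup:
                    self.backup[name] = param.detach().clone()
                param.data += deltas[name]

    def unpatch(self):
        # Restore the backed-up weights. If this step is skipped, e.g. as an
        # optimization for a model that supposedly "didn't change", the
        # patched weights leak into the next sampling run.
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data.copy_(self.backup[name])
        self.backup.clear()
```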
With a separate checkpoint loader, the ModelPatcher objects will not share the same underlying weights, so you don't get problems like this.
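Continuing the hypothetical `ToyPatcher` sketch above, this shows why two patchers over the same underlying model can interfere, while a separately loaded checkpoint cannot:

```python
# Two patchers wrapping the *same* model share its weights, so a missed
# unpatch in one is visible to the other.
base = torch.nn.Linear(4, 4)
scheduler_patcher = ToyPatcher(base)   # e.g. created by LoRAScheduler
loader_patcher = ToyPatcher(base)      # e.g. created by LoraLoader

scheduler_patcher.patch({"weight": torch.ones(4, 4)})
# If scheduler_patcher.unpatch() never runs, loader_patcher now "samples"
# with the scheduler's patched weights still applied.

# A separate checkpoint load creates an independent model, so its patcher
# cannot be affected by the other patcher's missed unpatch:
independent = ToyPatcher(torch.nn.Linear(4, 4))
```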
Maybe I'll figure out a fix at some point, but for now I'd just suggest not mixing the LoRAScheduler or ScheduleToModel nodes with LoraLoader in the same workflow.