TensorStack-AI / OnnxStack

C# Stable Diffusion using ONNX Runtime

LCM LoRA models fail to run because the UNet has no 4th input

taotaow opened this issue · comments

When running with LCM LoRA fused models, an error occurs. Error source line:
https://github.com/saddam213/OnnxStack/blob/750506d7014d8db0bc6a3a8cca3097ba24603f36/OnnxStack.StableDiffusion/Diffusers/LatentConsistency/LatentConsistencyDiffuser.cs#L130C25-L130C80

It needs a check; changing it to the code below fixes the error:

```csharp
if (metadata.Inputs.Count > 3)
{
    inferenceParameters.AddInputTensor(guidanceEmbeddings);
}
```
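For comparison, the same guard can be sketched in Python (the function and variable names here are mine, not OnnxStack's; `input_names` stands in for the UNet's ONNX input metadata): the guidance embeddings are only passed when the model actually exposes a 4th `timestep_cond` input.

```python
def build_unet_feed(input_names, sample, timestep, encoder_hidden_states, guidance_embeddings):
    """Assemble the UNet input dict, adding timestep_cond only when the model has a 4th input."""
    feed = {
        "sample": sample,
        "timestep": timestep,
        "encoder_hidden_states": encoder_hidden_states,
    }
    # Mirror of the proposed C# check: metadata.Inputs.Count > 3
    if len(input_names) > 3:
        feed["timestep_cond"] = guidance_embeddings
    return feed

# A plain SD UNet has 3 inputs; a true LCM UNet has a 4th, "timestep_cond":
sd_inputs = ["sample", "timestep", "encoder_hidden_states"]
lcm_inputs = sd_inputs + ["timestep_cond"]
```

With this guard, LoRA-merged models that keep the 3-input signature skip the extra tensor, while native LCM models still receive it.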

Edit: Ignore previous advice, I was wrong

Do you have a link to the model you are using?

Pushed a commit if you would like to test
3808599

OnnxStack.StableDiffusion\Diffusers\LatentConsistency\InpaintLegacyDiffuser.cs has the same issue.
LCM LoRA supports text2img/img2img/inpaint; see https://huggingface.co/latent-consistency/lcm-lora-sdv1-5

Steps to make LCM LoRA models:

1. Fuse the Stable Diffusion model and the LCM LoRA:

```python
from diffusers import DiffusionPipeline, LCMScheduler
import torch

pipe = DiffusionPipeline.from_pretrained("Lykon/dreamshaper-8", torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.fuse_lora()
pipe.save_pretrained("D:/lcm-dreamshaper-8")
```

2. Export to ONNX:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipeline = ORTStableDiffusionPipeline.from_pretrained("D:/lcm-dreamshaper-8", export=True)
pipeline.save_pretrained("D:/lcm-dreamshaper-8-onnx")
```

3. Test the ONNX model; it succeeds:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

pipe = ORTStableDiffusionPipeline.from_pretrained("D:/lcm-dreamshaper-8-onnx")
pipe.safety_checker = None
prompt = "1girl"
image = pipe(prompt, num_inference_steps=4, guidance_scale=1.0).images[0]
image.save("girl.png")
```

4. Test with OnnxStack.UI.exe; it fails. Changing the line mentioned above fixes it.
@saddam213

Unfortunately I don't have the ability to make models; I don't have a Python environment.

I will have to wait until there are some public ones, unfortunately.

I am unable to support this model type as I have no access to a model

Closing as unfixable

Hello! I've been trying to get various LCM models running in the app lately, but I've encountered this same issue (I think) with a bunch of models, mostly while trying to convert some myself. Here are some of the things I've noticed:

* It seems to occur whenever I try to run inference on an ONNX model that doesn't have a "timestep_cond" (4th) input.

* I can bypass this error by setting the pipeline to regular SD instead of LCM, but then I can't use the LCM scheduler, which messes up the generation and creates artifacts. That, of course, is less than ideal.

* I don't know if I'm right, but something I noticed is that the models that failed were all _merged_ with the lcm-lora-sdv1-5 LoRA, and they had "time_cond_proj_dim": null in the UNet's config.json. However, the only model I've gotten working so far, [LCM-Dreamshaper-V7-ONNX](https://huggingface.co/TheyCallMeHex/LCM-Dreamshaper-V7-ONNX), has "time_cond_proj_dim": 256 and probably isn't a merge (I'm no expert, so take this whole paragraph with a grain of salt).

* [This ONNX model](https://huggingface.co/Disty0/LCM_SoteMix) is an example of this: a Stable Diffusion model merged with an LCM LoRA.
  I'm using the DirectML backend, and I also tried the CUDA backend, to no avail.

I apologize for the amount of text and my imperfect English, but thanks for reading!
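The `time_cond_proj_dim` observation above can be checked programmatically. A minimal sketch (the helper name is mine, not part of OnnxStack or diffusers): a natively distilled LCM UNet declares a non-null `time_cond_proj_dim` in its config.json, while a plain SD UNet merged with the LoRA keeps it null.

```python
import json

def expects_timestep_cond(unet_config: dict) -> bool:
    """True if the UNet config declares a time_cond_proj_dim, i.e. a timestep_cond input."""
    return unet_config.get("time_cond_proj_dim") is not None

# Example config fragments matching the observation above:
lcm_native = json.loads('{"time_cond_proj_dim": 256}')   # e.g. LCM-Dreamshaper-V7 style
lora_merge = json.loads('{"time_cond_proj_dim": null}')  # e.g. lcm-lora-sdv1-5 merge style
```

Reading this field from `unet/config.json` before inference would let a loader decide up front whether to feed the 4th input.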

Thank you for the link to an LCM LoRA ONNX model!

I will be able to add support now, I should be able to get this done today :)


OnnxStack UI and Amuse UI have been updated and now support LCM LoRA models
https://github.com/saddam213/OnnxStack/releases/tag/v0.17.0
https://github.com/Stackyard-AI/Amuse/releases/tag/v1.1.2

Thanks again for the model file link :)

Thank you!!!! I've just downloaded the UI and tested it with a model that I converted myself; it appears to work!
That was quick, thanks! 🎉