TensorStack-AI / OnnxStack

C# Stable Diffusion using ONNX Runtime


sdxl-turbo can't run

taotaow opened this issue · comments

commented

onnx model: https://huggingface.co/stabilityai/sdxl-turbo
"TokenizerType": "Both",
"ModelType": "Base",
"PipelineType": "StableDiffusionXL",

[2023/12/20 8:33:04] [Information] [ModelPickerControl] [LoadModel] - 'SDXLTurbo' Loading...
[2023/12/20 8:36:12] [Information] [ModelPickerControl] [LoadModel] - 'SDXLTurbo' Loaded., Elapsed: 188.1343sec
[2023/12/20 8:37:20] [Information] [StableDiffusionXLDiffuser] [DiffuseAsync] - Diffuse starting...
[2023/12/20 8:37:20] [Information] [StableDiffusionXLDiffuser] [DiffuseAsync] - Model: SDXLTurbo, Pipeline: StableDiffusionXL, Diffuser: TextToImage, Scheduler: EulerAncestral
[2023/12/20 8:37:22] [Error] [TextToImageView] Error during Generate
System.AggregateException: One or more errors occurred. ([ErrorCode:InvalidArgument] Got invalid dimensions for input: time_ids for the following indices
index: 1 Got: 12 Expected: 6
Please fix either the inputs or the model.)
---> Microsoft.ML.OnnxRuntime.OnnxRuntimeException: [ErrorCode:InvalidArgument] Got invalid dimensions for input: time_ids for the following indices
index: 1 Got: 12 Expected: 6
Please fix either the inputs or the model.
   at Microsoft.ML.OnnxRuntime.InferenceSession.<>c__DisplayClass75_0.b__0(IReadOnlyCollection`1 outputs, IntPtr status)
--- End of stack trace from previous location ---
   at Microsoft.ML.OnnxRuntime.InferenceSession.RunAsync(RunOptions options, IReadOnlyCollection`1 inputNames, IReadOnlyCollection`1 inputValues, IReadOnlyCollection`1 outputNames, IReadOnlyCollection`1 outputValues)
   at OnnxStack.StableDiffusion.Diffusers.StableDiffusionXL.StableDiffusionXLDiffuser.SchedulerStepAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, PromptEmbeddingsResult promptEmbeddings, Boolean performGuidance, Action`2 progressCallback, CancellationToken cancellationToken)
   at OnnxStack.StableDiffusion.Diffusers.DiffuserBase.DiffuseAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, Action`2 progressCallback, CancellationToken cancellationToken)
   at OnnxStack.StableDiffusion.Services.StableDiffusionService.DiffuseAsync(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, Action`2 progress, CancellationToken cancellationToken)
   at OnnxStack.StableDiffusion.Services.StableDiffusionService.GenerateAsync(StableDiffusionModelSet model, PromptOptions prompt, SchedulerOptions options, Action`2 progressCallback, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at OnnxStack.StableDiffusion.Services.StableDiffusionService.<>c.b__15_0(Task`1 t)
   at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
   at System.Threading.ExecutionContext.RunFromThreadPoolDispatchLoop(Thread threadPoolThread, ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
   at OnnxStack.StableDiffusion.Services.StableDiffusionService.GenerateAsBytesAsync(StableDiffusionModelSet model, PromptOptions prompt, SchedulerOptions options, Action`2 progressCallback, CancellationToken cancellationToken)
   at OnnxStack.UI.Views.TextToImageView.ExecuteStableDiffusion(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, BatchOptions batchOptions)+MoveNext() in D:\Repositories\OnnxStack\OnnxStack.UI\Views\TextToImageView.xaml.cs:line 308
   at OnnxStack.UI.Views.TextToImageView.ExecuteStableDiffusion(StableDiffusionModelSet modelOptions, PromptOptions promptOptions, SchedulerOptions schedulerOptions, BatchOptions batchOptions)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at OnnxStack.UI.Views.TextToImageView.Generate() in D:\Repositories\OnnxStack\OnnxStack.UI\Views\TextToImageView.xaml.cs:line 193
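For context on the error itself: the mismatch is on the second dimension of the UNet's time_ids input. A standard SDXL UNet expects time_ids with shape [batch, 6], one (original_height, original_width, crop_top, crop_left, target_height, target_width) vector per sample, so "Got: 12 Expected: 6" suggests two of those 6-value vectors ended up packed into a single row (for example when the conditional and unconditional guidance passes are combined) instead of being stacked along the batch dimension. A minimal sketch of the expected layout using the ONNX Runtime C# API, not OnnxStack's actual code:

using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Sketch only: illustrates the layout the SDXL UNet expects for "time_ids",
// not OnnxStack's actual implementation.
static NamedOnnxValue BuildTimeIds(int batch, int height, int width)
{
    // One 6-value vector per batch item:
    // (orig_height, orig_width, crop_top, crop_left, target_height, target_width)
    var timeIds = new DenseTensor<float>(new[] { batch, 6 });
    for (int b = 0; b < batch; b++)
    {
        timeIds[b, 0] = height;  // original height
        timeIds[b, 1] = width;   // original width
        timeIds[b, 2] = 0f;      // crop top
        timeIds[b, 3] = 0f;      // crop left
        timeIds[b, 4] = height;  // target height
        timeIds[b, 5] = width;   // target width
    }

    // With classifier-free guidance the batch dimension is doubled
    // ([2 * batch, 6]); the second dimension must always stay 6.
    return NamedOnnxValue.CreateFromTensor("time_ids", timeIds);
}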

You may have to set GuidanceScale = 0; I don't think SDXL Turbo supports classifier-free guidance.

Also try Width/Height = 512, since it isn't trained at a true SDXL size.
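For reference, a rough sketch of those settings when calling the service directly. The GenerateAsync signature is taken from the stack trace above, but the PromptOptions/SchedulerOptions property names here are assumptions based on this thread, so check them against your OnnxStack version:

// Sketch only: property names below are assumptions from this thread and are
// not verified against the OnnxStack API; modelSet and stableDiffusionService
// are assumed to have been loaded/resolved elsewhere.
var promptOptions = new PromptOptions
{
    Prompt = "a photo of a cat"   // example prompt
};

var schedulerOptions = new SchedulerOptions
{
    GuidanceScale = 0,    // SDXL Turbo is trained without classifier-free guidance
    InferenceSteps = 4,   // Turbo is a few-step model; 1-4 steps
    Width = 512,          // Turbo targets ~512, not the usual SDXL 1024
    Height = 512
};

var result = await stableDiffusionService.GenerateAsync(
    modelSet, promptOptions, schedulerOptions,
    progressCallback: null, cancellationToken: default);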

commented

I set GuidanceScale = 0 and Steps = 1, but got a noise image.

OK, thank you. I will remove SDXL Turbo as a supported model; I assumed it would be the same as SDXL.

I wanted to chime in and mention that I was messing around with SDXL-Turbo and was able to get it to generate with these settings using the LatentConsistencyXL pipeline and LCM Scheduler:

{ "Name": "SDXL-Turbo", "IsEnabled": true, "SampleSize": 1024, "PipelineType": "LatentConsistencyXL", "Diffusers": [ "TextToImage", "ImageToImage", "ImageInpaintLegacy" ], "DeviceId": 0, "InterOpNumThreads": 0, "IntraOpNumThreads": 0, "ExecutionMode": "ORT_SEQUENTIAL", "ExecutionProvider": "Cpu", "TokenizerConfig": { "PadTokenId": 49407, "BlankTokenId": 49407, "TokenizerLimit": 77, "TokenizerLength": 768, "OnnxModelPath": "cliptokenizer.onnx" }, "Tokenizer2Config": { "PadTokenId": 1, "BlankTokenId": 49407, "TokenizerLimit": 77, "TokenizerLength": 1280, "OnnxModelPath": "cliptokenizer.onnx" },

And here is a sample image using these parameters:
512x512
4 inference steps
0 guidance scale

[Sample image: SDXL-Turbo_LCM]

I haven't had any success with 1-step generation, little success with 2 steps, and great success with 4 steps.
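If anyone wants to double-check what a given SDXL-Turbo UNet export actually expects for time_ids, ONNX Runtime can dump the input metadata. A small sketch; the model path is just a placeholder:

using System;
using Microsoft.ML.OnnxRuntime;

// Sketch: print each UNet input name with its declared shape so you can see
// whether "time_ids" is exported as [batch, 6] or something else.
// "unet/model.onnx" is a placeholder path.
using var session = new InferenceSession("unet/model.onnx");
foreach (var (name, meta) in session.InputMetadata)
{
    // Dynamic axes are reported as -1.
    Console.WriteLine($"{name}: [{string.Join(", ", meta.Dimensions)}]");
}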