TensorStack-AI / OnnxStack

C# Stable Diffusion using ONNX Runtime


How to use ONNX models that don't have a tokenizer model.onnx file

patientx opened this issue · comments

It only worked with the LMS model. Other models load with the tokenizer in the app directory (the UI shows "load"), but then don't do anything.

commented

There is one cliptokenizer.onnx in the OnnxStack folder; I think you can use it for all of the models. Just rename it to model.onnx, put it in the tokenizer folder, and test it out.
EDIT: it's used by default, as you can see in the settings, so you don't need to do anything. Just select the unet and the other models and you're good to go.
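
For anyone curious what that tokenizer model actually does, here is a minimal C# sketch of calling it directly with ONNX Runtime. It assumes the Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Extensions packages (the tokenizer is built from custom operators, so the extensions library has to be registered), and the tensor names `string_input` and `input_ids` are assumptions — check the real names with a tool like Netron.

```csharp
using System;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class TokenizerDemo
{
    static void Main()
    {
        using var options = new SessionOptions();
        options.RegisterOrtExtensions(); // tokenizer ops live in onnxruntime-extensions

        using var session = new InferenceSession("cliptokenizer.onnx", options);

        // CLIP tokenizers take a batch of strings and return int64 token ids.
        var prompt = new DenseTensor<string>(new[] { "a photo of an astronaut" }, new[] { 1 });
        var inputs = new[] { NamedOnnxValue.CreateFromTensor("string_input", prompt) }; // name assumed

        using var results = session.Run(inputs);
        var tokenIds = results.First().AsEnumerable<long>().ToArray(); // "input_ids" assumed
        Console.WriteLine(string.Join(" ", tokenIds));
    }
}
```

Since Stable Diffusion 1.x models all share the same CLIP tokenizer vocabulary, one tokenizer model can serve all of them, which is presumably why the bundled default just works.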

commented

You should get the new version; there is a high chance the fp16 support fix solves your issue. I had the same problem: it loaded an LCM fp16 model but didn't generate anything, and now it's fixed.
Maybe you should also tell us about the model you're using. Is it fp32 or fp16? How did you convert it? Then the guys here will help you.
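
If you're not sure whether a model is fp16 or fp32, one way to check is to open it with ONNX Runtime and print the element types of its inputs. A short sketch — the path is hypothetical:

```csharp
using System;
using Microsoft.ML.OnnxRuntime;

class PrecisionCheck
{
    static void Main()
    {
        // Hypothetical path — point this at the unet you're testing.
        using var session = new InferenceSession(@"C:\models\unet\model.onnx");

        foreach (var kvp in session.InputMetadata)
        {
            // fp32 tensors report System.Single; fp16 tensors report a
            // half-precision type (Float16 in recent ONNX Runtime builds).
            Console.WriteLine($"{kvp.Key}: {kvp.Value.ElementType}");
        }
    }
}
```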

I've written this in a discussion, but let me repeat it here: there seems to be one other difference with my local models. Each has one big model.onnx file, instead of the small model.onnx plus large model.onnx_data pair that the downloaded models have.
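
For context, the split layout is ONNX's external-data format: a single protobuf file is limited to 2 GB, so converters write larger weights to a companion model.onnx_data file that has to stay next to model.onnx under its original name. A quick sketch (hypothetical path) to sanity-check that before loading — ONNX Runtime picks up the companion file automatically:

```csharp
using System;
using System.IO;
using Microsoft.ML.OnnxRuntime;

class ExternalDataCheck
{
    static void Main()
    {
        var modelPath = @"C:\models\unet\model.onnx"; // hypothetical path
        var dataPath = modelPath + "_data";           // companion weights file

        if (File.Exists(dataPath))
            Console.WriteLine($"External weights: {new FileInfo(dataPath).Length / (1024 * 1024)} MB");
        else
            Console.WriteLine("No model.onnx_data — weights are embedded in model.onnx.");

        // ONNX Runtime resolves the external file relative to model.onnx,
        // so no extra session options are needed here.
        using var session = new InferenceSession(modelPath);
        Console.WriteLine("Model loaded OK.");
    }
}
```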

commented

This is something I noticed too; they're different somehow. For example, I tried to quantize Lyriel and Deliberate (the ones you can get here) and they didn't work, but the epiCRealism one did. Aside from luck, I believe you need to find the diffusers or safetensors version of the model you want and reconvert it with the updated scripts that use the newer ONNX tooling.

Also, I've read that you mentioned Olive models. If they're Olive-optimized models, I'm not sure they will work with ONNX DirectML; you should ask around in the Olive repo.

@TheyCallMeHex, can you please explain how you converted the models here, so patientx can convert the older models he wants?