microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Home Page: https://onnxruntime.ai

[Inference] Could not find an implementation for GroupNorm

neonarc4 opened this issue · comments

### Describe the issue

I don't know what's going on here; I can't even run this simple example.

### To reproduce

from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipeline = ORTStableDiffusionXLPipeline.from_pretrained("greentree/SDXL-olive-optimized")


2024-05-12 22:48:45.277336: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-05-12 22:48:46.160987: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
WARNING:tensorflow:From Z:\software\python11\Lib\site-packages\tf_keras\src\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.

Traceback (most recent call last):
  File "X:\sad\practice\test\aii\neo.py", line 259, in <module>
    pipeline = ORTStableDiffusionXLPipeline.from_pretrained("greentree/SDXL-olive-optimized")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\optimum\onnxruntime\modeling_ort.py", line 669, in from_pretrained
    return super().from_pretrained(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\optimum\modeling_base.py", line 402, in from_pretrained
    return from_pretrained_method(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\optimum\onnxruntime\modeling_diffusion.py", line 337, in _from_pretrained    vae_decoder, text_encoder, unet, vae_encoder, text_encoder_2 = cls.load_model(
                                                                   ^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\optimum\onnxruntime\modeling_diffusion.py", line 214, in load_model
    vae_decoder = ORTModel.load_model(vae_decoder_path, provider, session_options, provider_options)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\optimum\onnxruntime\modeling_ort.py", line 375, in load_model
    return ort.InferenceSession(
           ^^^^^^^^^^^^^^^^^^^^^
  File "Z:\software\python11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "Z:\software\python11\Lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for GroupNorm(1) node with name 'GroupNorm_0'
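
For context: GroupNorm here is an ONNX Runtime contrib op that the Olive-optimized SDXL export uses, and as far as I can tell its kernel is only registered for GPU execution providers (e.g. CUDA/DirectML), not the default CPU provider, which is what the NOT_IMPLEMENTED error reports. Below is a minimal sketch, assuming an onnxruntime build that ships the DirectML execution provider (e.g. the onnxruntime-directml package) and using optimum's `provider` argument to request it explicitly; this is not a confirmed fix, just what I would try:

```python
# Minimal sketch, under the assumption that the contrib GroupNorm kernel is
# available in the DirectML execution provider of this onnxruntime build.
import onnxruntime as ort
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

# Check which providers this build actually exposes before requesting one.
print(ort.get_available_providers())

pipeline = ORTStableDiffusionXLPipeline.from_pretrained(
    "greentree/SDXL-olive-optimized",
    provider="DmlExecutionProvider",  # assumption: this EP implements GroupNorm(1)
)
```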

### Urgency

I don't know what to say.

### ONNX Runtime Installation

Built from Source

### ONNX Runtime Version or Commit ID

1.17.3

### PyTorch Version

2.3.0.dev20240122+cpu

### Execution Provider

Other / Unknown

### Execution Provider Library Version

AMD
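
Since the execution provider above is listed as Other / Unknown, here is a small sketch (standard onnxruntime Python API) to pin down the exact build and the providers it was compiled with, which should make these fields precise:

```python
# Minimal sketch: report the installed ONNX Runtime version and the execution
# providers compiled into this build.
import onnxruntime as ort

print(ort.__version__)                # e.g. 1.17.3
print(ort.get_available_providers())  # e.g. ['DmlExecutionProvider', 'CPUExecutionProvider']
```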