microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator

Home Page: https://onnxruntime.ai


Onnxruntime-directml 1.18.0 breaks multithreaded inference sessions

Djdefrag opened this issue

Describe the issue

With the new version 1.18.0, running multiple InferenceSession instances on the same DirectML device from different threads causes all threads to stall, without raising any exception or error.

To reproduce

Thread 1

import onnx
import onnxruntime

AI_model_loaded = onnx.load(AI_model_path)

AI_model = onnxruntime.InferenceSession(
    path_or_bytes=AI_model_loaded.SerializeToString(),
    providers=[("DmlExecutionProvider", {"device_id": "0"})],
)

onnx_input  = {AI_model.get_inputs()[0].name: image}
onnx_output = AI_model.run(None, onnx_input)[0]

Thread n (where n can be any number)

import onnx
import onnxruntime

AI_model_loaded = onnx.load(AI_model_path)

AI_model = onnxruntime.InferenceSession(
    path_or_bytes=AI_model_loaded.SerializeToString(),
    providers=[("DmlExecutionProvider", {"device_id": "0"})],
)

onnx_input  = {AI_model.get_inputs()[0].name: image}
onnx_output = AI_model.run(None, onnx_input)[0]
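The two per-thread snippets above can be combined into one self-contained script. This is a sketch, not a confirmed minimal repro: the model path and the dummy NumPy input are assumptions, and the `join` timeout is added only so the reported stall becomes observable instead of hanging the process indefinitely.

```python
import threading
import numpy as np

try:
    # onnx and onnxruntime-directml are assumed installed; the import is
    # guarded so the sketch still loads on machines without them.
    import onnx
    import onnxruntime
except ImportError:
    onnx = onnxruntime = None

AI_model_path = "model.onnx"  # hypothetical model file


def run_inference(image):
    # Each thread loads the model and builds its own InferenceSession
    # on DirectML device 0, as in the snippets above.
    model = onnx.load(AI_model_path)
    session = onnxruntime.InferenceSession(
        path_or_bytes=model.SerializeToString(),
        providers=[("DmlExecutionProvider", {"device_id": "0"})],
    )
    inputs = {session.get_inputs()[0].name: image}
    return session.run(None, inputs)[0]


def main(n_threads=4):
    # Dummy NCHW float input; the real issue uses an actual image.
    image = np.zeros((1, 3, 64, 64), dtype=np.float32)
    threads = [
        threading.Thread(target=run_inference, args=(image,), daemon=True)
        for _ in range(n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        # With 1.18.0 the threads reportedly never return; a timeout
        # turns the silent stall into a visible message.
        t.join(timeout=60)
        if t.is_alive():
            print("thread stalled")


if __name__ == "__main__":
    main()
```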

Urgency

No response

Platform

Windows

OS Version

10

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

1.18.0