ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
onui05211 opened this issue · comments
I searched the internet for solutions and tried many of them, but still could not fix it. I changed
model = onnxruntime.InferenceSession(self._model_fp)
following the example in the error message:
onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
Has anyone else run into this?
This issue has been resolved. Following that hint, in recognizer.py line 187 I changed
model = onnxruntime.InferenceSession(self._model_fp)
to
model = onnxruntime.InferenceSession(self._model_fp, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
In utility.py line 157, I changed
sess = ort.InferenceSession(model_file_path)
to
sess = ort.InferenceSession(model_file_path, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
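Hard-coding TensorRT or CUDA providers will raise an error on machines whose ONNX Runtime build does not include them. A more portable variant (a sketch; the `pick_providers` helper is hypothetical, not part of cnocr) filters a preferred list against what the local build actually supports and always keeps the CPU fallback:

```python
def pick_providers(preferred, available):
    """Return the preferred execution providers that the local ONNX Runtime
    build actually supports, always keeping CPUExecutionProvider as fallback."""
    chosen = [p for p in preferred if p in available]
    if 'CPUExecutionProvider' not in chosen:
        chosen.append('CPUExecutionProvider')
    return chosen

# Usage (assumes onnxruntime is installed):
# import onnxruntime as ort
# providers = pick_providers(
#     ['TensorrtExecutionProvider', 'CUDAExecutionProvider'],
#     ort.get_available_providers())
# sess = ort.InferenceSession(model_file_path, providers=providers)
```

This way the same code runs unchanged on CPU-only and GPU machines, which may also explain why identical installation steps behave differently across PCs.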
With identical installation steps, some PCs work fine while others raise this error. What could be causing that?
Will the code be updated to handle this case?
Downgrading onnxruntime to a version below 1.16 should fix it. The newer onnxruntime releases changed the interface; a new version of cnocr will be released in a few days.
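Until the new cnocr release is out, the suggested workaround can be applied with a version pin (assuming a pip-based install; use the `-gpu` package only if you installed that variant):

```shell
# Pin onnxruntime below 1.16 until the updated cnocr ships
pip install "onnxruntime<1.16"
# Or, for the GPU build:
pip install "onnxruntime-gpu<1.16"
```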