open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework

Home Page: https://mmdeploy.readthedocs.io/en/latest/

[Bug] SystemError: Exception escaped from default exception translator! when ONNX Inference

Daanfb opened this issue

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

I downloaded the RTMPose-S static ONNX model from https://platform.openmmlab.com/deploee

I'm trying to run inference with that model using the code shown in the tutorial.
With the model used in the tutorial (td-hm_hrnet-w32_8xb64-210e_coco-256x192) my code works, but with RTMPose-S it fails.

The tutorial I'm referring to is this one: https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/04-supported-codebases/mmpose.md

Reproduction

This is my code in Python:

import torch

from mmdeploy.apis.utils import build_task_processor
from mmdeploy.utils import get_input_shape, load_config

deploy_cfg = 'mmdeploy/configs/mmpose/pose-detection_onnxruntime_static.py'
model_cfg = 'mmpose/configs/body_2d_keypoint/rtmpose/coco/rtmpose-s_8xb256-420e_coco-256x192.py'
device = 'cpu'
backend_model = ['onnx-models/rtmpose-s-static.onnx']
image = 'image.jpg'

def get_image_prediction(deploy_cfg, model_cfg, device, backend_model, image, output_path):

    # read deploy_cfg and model_cfg
    deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)

    # build task and backend model
    task_processor = build_task_processor(model_cfg, deploy_cfg, device)
    model = task_processor.build_backend_model(backend_model)

    # process input image
    input_shape = get_input_shape(deploy_cfg)
    model_inputs, _ = task_processor.create_input(image, input_shape)

    # do model inference
    with torch.no_grad():
        result = model.test_step(model_inputs)

    # visualize results
    task_processor.visualize(
        image=image,
        model=model,
        result=result[0],
        window_name='visualize',
        output_file=output_path)

get_image_prediction(deploy_cfg, model_cfg, device, backend_model, image, 'predictions/output-rtmpose-s-onnx.png')
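One way to narrow this down (a suggested check, not part of the original report or the tutorial): confirm that the downloaded file is a structurally valid ONNX graph and note which opset it targets, since a model exported against a newer opset than the installed onnxruntime supports can fail with an opaque error at run time. A minimal sketch, reusing the path from backend_model above:

import onnx

# Validate the graph and print version metadata; check_model raises
# onnx.checker.ValidationError if the file itself is malformed.
m = onnx.load('onnx-models/rtmpose-s-static.onnx')
onnx.checker.check_model(m)
print('IR version:', m.ir_version)
print('Opsets:', [(op.domain, op.version) for op in m.opset_import])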

Environment

03/01 09:42:44 - mmengine - INFO - TorchVision: 0.15.2
03/01 09:42:44 - mmengine - INFO - OpenCV: 4.9.0
03/01 09:42:44 - mmengine - INFO - MMEngine: 0.10.3
03/01 09:42:44 - mmengine - INFO - MMCV: 2.1.0
03/01 09:42:44 - mmengine - INFO - MMCV Compiler: MSVC 192930148
03/01 09:42:44 - mmengine - INFO - MMCV CUDA Compiler: 11.8
03/01 09:42:44 - mmengine - INFO - MMDeploy: 1.3.1+c9389c9
03/01 09:42:44 - mmengine - INFO -

03/01 09:42:44 - mmengine - INFO - **********Backend information**********
03/01 09:42:44 - mmengine - INFO - tensorrt:    None
03/01 09:42:44 - mmengine - INFO - ONNXRuntime: 1.8.1
03/01 09:42:44 - mmengine - INFO - ONNXRuntime-gpu:     None
03/01 09:42:44 - mmengine - INFO - ONNXRuntime custom ops:      NotAvailable
03/01 09:42:44 - mmengine - INFO - pplnn:       None
03/01 09:42:45 - mmengine - INFO - ncnn:        None
03/01 09:42:45 - mmengine - INFO - snpe:        None
03/01 09:42:45 - mmengine - INFO - openvino:    None
03/01 09:42:45 - mmengine - INFO - torchscript: 2.2.1+cu118
03/01 09:42:45 - mmengine - INFO - torchscript custom ops:      NotAvailable
03/01 09:42:45 - mmengine - INFO - rknn-toolkit:        None
03/01 09:42:45 - mmengine - INFO - rknn-toolkit2:       None
03/01 09:42:45 - mmengine - INFO - ascend:      None
03/01 09:42:45 - mmengine - INFO - coreml:      None
03/01 09:42:45 - mmengine - INFO - tvm: None
03/01 09:42:45 - mmengine - INFO - vacc:        None
03/01 09:42:45 - mmengine - INFO -

03/01 09:42:45 - mmengine - INFO - **********Codebase information**********
03/01 09:42:45 - mmengine - INFO - mmdet:       3.2.0
03/01 09:42:45 - mmengine - INFO - mmseg:       None
03/01 09:42:45 - mmengine - INFO - mmpretrain:  None
03/01 09:42:45 - mmengine - INFO - mmocr:       None
03/01 09:42:45 - mmengine - INFO - mmagic:      None
03/01 09:42:45 - mmengine - INFO - mmdet3d:     None
03/01 09:42:45 - mmengine - INFO - mmpose:      1.3.1
03/01 09:42:45 - mmengine - INFO - mmrotate:    None
03/01 09:42:45 - mmengine - INFO - mmaction:    None
03/01 09:42:45 - mmengine - INFO - mmrazor:     None
03/01 09:42:45 - mmengine - INFO - mmyolo:      None
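Two details in this log may be relevant (an observation, not from the original report): ONNXRuntime is at 1.8.1, which is quite old, and the mmdeploy ONNXRuntime custom ops are NotAvailable. To test whether the runtime itself can execute the graph, independent of mmdeploy's wrapper and its IOBinding path, one can run a bare onnxruntime session. A minimal sketch; the 1x3x256x192 NCHW input shape is an assumption based on the 256x192 static export and should be confirmed against get_inputs():

import numpy as np
import onnxruntime as ort

# Plain ORT run that bypasses mmdeploy entirely; if this also raises,
# the problem is the model/runtime combination, not the deploy config.
sess = ort.InferenceSession('onnx-models/rtmpose-s-static.onnx',
                            providers=['CPUExecutionProvider'])
print([(i.name, i.shape) for i in sess.get_inputs()])
dummy = np.random.rand(1, 3, 256, 192).astype(np.float32)  # assumed input shape
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])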

Error traceback

SystemError                               Traceback (most recent call last)
Cell In[12], line 1
----> 1 get_image_prediction(deploy_cfg, model_cfg, device, backend_model, image, 'predicciones/output-rtmpose-m-onnx.png')

Cell In[11], line 15
     13 # do model inference
     14 with torch.no_grad():
---> 15     result = model.test_step(model_inputs)
     17 # visualize results
     18 task_processor.visualize(
     19     image=image,
     20     model=model,
     21     result=result[0],
     22     window_name='visualize',
     23     output_file=output_path)

File c:\Users\E2K6\anaconda3\envs\mmpose-env\lib\site-packages\mmengine\model\base_model\base_model.py:145, in BaseModel.test_step(self, data)
    136 """``BaseModel`` implements ``test_step`` the same as ``val_step``.
    137 
    138 Args:
   (...)
    142     list: The predictions of given data.
    143 """
    144 data = self.data_preprocessor(data, False)
--> 145 return self._run_forward(data, mode='predict')

File c:\Users\E2K6\anaconda3\envs\mmpose-env\lib\site-packages\mmengine\model\base_model\base_model.py:361, in BaseModel._run_forward(self, data, mode)
    351 """Unpacks data for :meth:`forward`
    352 
    353 Args:
   (...)
    358     dict or list: Results of training or testing mode.
    359 """
    360 if isinstance(data, dict):
--> 361     results = self(**data, mode=mode)
    362 elif isinstance(data, (list, tuple)):
    363     results = self(*data, mode=mode)

File c:\Users\E2K6\anaconda3\envs\mmpose-env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\users\e2k6\desktop\daniel\mmpose\mmdeploy\mmdeploy\codebase\mmpose\deploy\pose_detection_model.py:99, in End2EndModel.forward(self, inputs, data_samples, mode, **kwargs)
     96 assert mode == 'predict', \
     97     'Backend model only support mode==predict,' f' but get {mode}'
     98 inputs = inputs.contiguous().to(self.device)
---> 99 batch_outputs = self.wrapper({self.input_name: inputs})
    100 batch_outputs = self.wrapper.output_to_list(batch_outputs)
    102 codebase_cfg = get_codebase_config(self.deploy_cfg)

File c:\Users\E2K6\anaconda3\envs\mmpose-env\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File c:\users\e2k6\desktop\daniel\mmpose\mmdeploy\mmdeploy\backend\onnxruntime\wrapper.py:108, in ORTWrapper.forward(self, inputs)
    106 if self.device_type == 'cuda':
    107     torch.cuda.synchronize()
--> 108 self.__ort_execute(self.io_binding)
    109 output_list = self.io_binding.copy_outputs_to_cpu()
    110 outputs = {}

File c:\users\e2k6\desktop\daniel\mmpose\mmdeploy\mmdeploy\utils\timer.py:67, in TimeCounter.count_time.<locals>._register.<locals>.fun(*args, **kwargs)
     64         torch.cuda.synchronize()
     65     start_time = time.perf_counter()
---> 67 result = func(*args, **kwargs)
     69 if enable:
     70     if with_sync and torch.cuda.is_available():

File c:\users\e2k6\desktop\daniel\mmpose\mmdeploy\mmdeploy\backend\onnxruntime\wrapper.py:126, in ORTWrapper.__ort_execute(self, io_binding)
    118 @TimeCounter.count_time(Backend.ONNXRUNTIME.value)
    119 def __ort_execute(self, io_binding: ort.IOBinding):
    120     """Run inference with ONNXRuntime session.
    121 
    122     Args:
    123         io_binding (ort.IOBinding): To bind input/output to a specified
    124             device, e.g. GPU.
    125     """
--> 126     self.sess.run_with_iobinding(io_binding)

File c:\Users\E2K6\anaconda3\envs\mmpose-env\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py:229, in Session.run_with_iobinding(self, iobinding, run_options)
    222 def run_with_iobinding(self, iobinding, run_options=None):
    223     """
    224      Compute the predictions.
    225 
    226      :param iobinding: the iobinding object that has graph inputs/outputs bind.
    227      :param run_options: See :class:`onnxruntime.RunOptions`.
    228     """
--> 229     self._sess.run_with_iobinding(iobinding._iobinding, run_options)

SystemError: Exception escaped from default exception translator!
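
For context (an observation, not from the original report): this SystemError appears to be pybind11's generic fallback when a C++ exception raised inside onnxruntime cannot be translated into a Python exception, so the runtime's actual error message is lost. If the bare-ORT check above fails the same way, upgrading onnxruntime beyond 1.8.1 or re-exporting the model with the local mmdeploy installation would be the natural next steps to try.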