open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework

Home Page: https://mmdeploy.readthedocs.io/en/latest/


[Bug] assertion/libs/mmdeploy-v1.3.0/csrc/mmdeploy/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp

hoainamken opened this issue

Checklist

  • I have searched related issues but cannot get the expected help.
  • I have read the FAQ documentation but cannot get the expected help.
  • The bug has not been fixed in the latest version.

Describe the bug

I got this error when converting an ONNX model to a TensorRT engine on a Jetson Orin Nano device.

[12/27/2023-16:28:11] [I] Starting inference
#assertion/libs/mmdeploy-v1.3.0/csrc/mmdeploy/backend_ops/tensorrt/batched_nms/trt_batched_nms.cpp,103

I have also tried some of the solutions listed here (setting max performance mode, reducing pre_top_k), but the error still occurs; a sketch of the pre_top_k change is shown below.
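
For context, pre_top_k caps how many candidate boxes are fed into the TRTBatchedNMS plugin, and it is set in the mmdeploy deploy config rather than on the trtexec command line. A minimal sketch of the relevant block, assuming a mmdet-style detection deploy config (field names follow mmdeploy 1.x defaults; the concrete values here are illustrative, not a confirmed fix):

    # Post-processing block of a mmdeploy detection deploy config.
    # Lowering pre_top_k reduces how many candidate boxes reach the
    # TRTBatchedNMS plugin at inference time.
    codebase_config = dict(
        type='mmdet',
        task='ObjectDetection',
        model_type='end2end',
        post_processing=dict(
            score_threshold=0.05,
            confidence_threshold=0.005,
            iou_threshold=0.5,
            max_output_boxes_per_class=200,
            pre_top_k=1000,  # reduced from the common default of 5000
            keep_top_k=100,
            background_label_id=-1))

Note that after changing the config, the model has to be re-exported to ONNX so the new pre_top_k is baked into the graph (the filename model_trt_k1000.onnx below suggests this was already done with pre_top_k=1000).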

Has anyone run into the same error in the same environment, and if so, how did you solve it?

Reproduction

/usr/src/tensorrt/bin/trtexec \
    --onnx=/models/model_trt_k1000.onnx \
    --saveEngine=/models/model.engine \
    --fp16 --workspace=5000 --verbose
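
One thing worth double-checking when reproducing with trtexec directly: the exported ONNX graph contains mmdeploy's custom TRTBatchedNMS op, so the mmdeploy plugin library must be visible to TensorRT (trtexec can load it via --plugins=<path>, or it can be preloaded with LD_PRELOAD). A minimal Python sketch to verify the plugin is registered; the .so path is an assumption and depends on where mmdeploy was built:

    import ctypes
    import tensorrt as trt

    # Assumed location of the mmdeploy TensorRT custom ops; adjust to
    # wherever libmmdeploy_tensorrt_ops.so was built on the device.
    ctypes.CDLL('/path/to/libmmdeploy_tensorrt_ops.so')

    logger = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(logger, '')

    # TRTBatchedNMS should appear here if the library loaded correctly.
    registry = trt.get_plugin_registry()
    print([c.name for c in registry.plugin_creator_list if 'NMS' in c.name])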

Environment

JetPack 6.0
DeepStream 6.4
MMDeploy 1.3.0
TensorRT 8.6.2

Error traceback

No response