marcoslucianops / DeepStream-Yolo-Seg

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

Missing RoiAlign TRT plugin in TRT 8.4.1

JeroendenBoef opened this issue

Thanks for publishing your instance segmentation implementation, greatly appreciated!

My setup:

  • Device: Jetson AGX Xavier
  • Jetpack 5.0.2
  • TRT: 8.4.1
  • Docker image: nvcr.io/nvidia/deepstream-l4t:6.1.1-samples
  • Model: yolov8

Generating a TRT engine on a Jetson AGX Xavier with DeepStream 6.1.1 fails because the RoiAlign plugin was not implemented until TRT 8.5.1. Due to a custom L4T kernel, I cannot switch to Jetpack 5.1.

Have you managed to get it to work on Jetpack 5.0.2? If not, would you have any pointers on a workaround? I was looking into grabbing NVIDIA's RoiAlign and adding it as a custom plugin to TRT 8.4.1, but I am unsure whether this snippet alone gives me everything I need for that.

The full error trace:

0:00:01.242304077    48     0x120b0b60 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:367: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
ERROR: [TRT]: ModelImporter.cpp:773: While parsing node number 340 [RoiAlign -> "/1/RoiAlign_output_0"]:
ERROR: [TRT]: ModelImporter.cpp:774: --- Begin node ---
ERROR: [TRT]: ModelImporter.cpp:775: input: "/0/model.22/proto/cv3/act/Mul_output_0"
input: "/1/Reshape_1_output_0"
input: "/1/Gather_5_output_0"
output: "/1/RoiAlign_output_0"
name: "/1/RoiAlign"
op_type: "RoiAlign"
attribute {
  name: "coordinate_transformation_mode"
  s: "half_pixel"
  type: STRING
}
attribute {
  name: "mode"
  s: "avg"
  type: STRING
}
attribute {
  name: "output_height"
  i: 160
  type: INT
}
attribute {
  name: "output_width"
  i: 160
  type: INT
}
attribute {
  name: "sampling_ratio"
  i: 0
  type: INT
}
attribute {
  name: "spatial_scale"
  f: 0.25
  type: FLOAT
}

ERROR: [TRT]: ModelImporter.cpp:776: --- End node ---
ERROR: [TRT]: ModelImporter.cpp:778: ERROR: builtin_op_importers.cpp:4890 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.
0:00:09.646492322    48     0x120b0b60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1943> [UID = 1]: build engine file failed
0:00:09.848985441    48     0x120b0b60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2029> [UID = 1]: build backend context failed
0:00:09.849393816    48     0x120b0b60 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1266> [UID = 1]: generate backend failed, check config file settings
0:00:09.852251066    48     0x120b0b60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Failed to create NvDsInferContext instance
0:00:09.852458566    48     0x120b0b60 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-inference> error: Config file path: yolo_luffing_test.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: yolo_test.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
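
The key line in the trace is the assertion `Plugin not found, are the plugin name, version, and namespace correct?`: the ONNX parser resolves `RoiAlign` through TensorRT's plugin registry, and on TRT 8.4.1 that creator does not exist yet (it first ships in TRT 8.5.x). A minimal sketch to confirm this from Python, assuming the standard tensorrt bindings are installed:

import tensorrt as trt

# Register all built-in plugins, then check the registry for RoiAlign.
trt.init_libnvinfer_plugins(None, "")
creators = trt.get_plugin_registry().plugin_creator_list
print("RoiAlign" in {c.name for c in creators})  # False on TRT 8.4.1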

Hi, I will do some tests and update the code soon.

I am working on a Jetson TX2 with DeepStream 6.0, and I am getting the same error as above.

You can backport the TRT plugin from 8.5.1 or higher to your current TRT version by compiling it as a standalone plugin, registering it with TRT as a custom plugin, and loading the custom lib at runtime. If you use the Python bindings, you can load it at runtime with:

import ctypes

# RTLD_GLOBAL makes the plugin's registration symbols visible to TensorRT
ctypes.CDLL("/path/to/custom/plugin/libRoiAlignPlugin.so", mode=ctypes.RTLD_GLOBAL)
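
Before building the engine, it's worth confirming that the load actually registered the creator; the assertion in the trace above fires exactly when this registry lookup fails. A minimal check (a sketch, assuming the standard tensorrt Python bindings):

import tensorrt as trt

# The ONNX parser resolves RoiAlign through the plugin registry, so the
# backported creator must be visible before the engine is built.
registry = trt.get_plugin_registry()
assert any(c.name == "RoiAlign" for c in registry.plugin_creator_list), \
    "RoiAlign creator not registered - check plugin name, version and namespace"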

Have you solved the problem?

Compiling the RoiAlignPlugin as a standalone TRT plugin and importing it like this does work, and it allows you to run this instance segmentation app on Jetpack versions lower than 5.1. We were experiencing some throughput drops on our setup, which in the end were likely due to bad industrial Jetson boards from our hardware supplier. We have since switched suppliers and now have access to the higher Jetpack version, so we haven't retested the backport setup to rule it out as the cause of the performance issues.

I would recommend upgrading to Jetpack 5.1 or higher if possible. If that is not an option and it would still be helpful to people, I could write up instructions on how to compile RoiAlign as a standalone plugin and import it in your DeepStream app. Alternatively, I could make a PR with the guide so that it is included in the repo @marcoslucianops, whichever you prefer.

If I upgrade to Jetpack 5.1, will it work?

Jetpack 5.1 and its minor versions (5.1.1 and 5.1.2) come with TensorRT 8.5.2, which has the RoiAlign plugin, so it should work on Jetpack 5.1.
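
A quick way to confirm which TensorRT you are on after the upgrade (a one-line sketch, assuming the tensorrt Python bindings are installed):

import tensorrt as trt

print(trt.__version__)  # expected to report 8.5.2.x on Jetpack 5.1 and later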

Yes, you are right, bro!
It works just like the YOLO detection models as well. I want to ask another question: how do I change the color of the mask, and how do I put the text in the middle of the mask?