Deep-Stream-ONNX

How to deploy ONNX models using DeepStream on Jetson Nano. [Blog] [Performance]

This repository provides complementary material to this blog post about deploying an ONNX object detection model using the DeepStream SDK on Jetson Nano. Various experiments were designed to test the features and performance of DeepStream.

NOTE (May 2021): There was an issue with the confidence score calculation: the code missed applying a sigmoid to the objectness score and a softmax to the conditional class probabilities (see issue #13). PR #14 fixes this issue and the code should now work well. However, I have not re-run the code myself, so do let me know if you face any issues. The linked performance video shows the performance of the older codebase without this fix. Sorry for the inconvenience.
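The fix described above amounts to applying a sigmoid to the raw objectness logit and a softmax over the raw class logits before multiplying them together. A minimal NumPy sketch of that calculation (the helper name and shapes are illustrative, not the repository's actual C++ code):

```python
import numpy as np

def score_candidates(objectness_logit, class_logits):
    """Turn raw Tiny YOLOv2 outputs into per-class confidence scores.

    Illustrative helper: the objectness logit gets a sigmoid and the
    class logits get a softmax, and the two are multiplied to produce
    the per-class confidence used for thresholding.
    """
    objectness = 1.0 / (1.0 + np.exp(-objectness_logit))   # sigmoid
    exp = np.exp(class_logits - np.max(class_logits))      # numerically stable softmax
    class_probs = exp / np.sum(exp)
    return objectness * class_probs

# Example: zero objectness logit -> sigmoid of 0.5, scaled class probabilities.
scores = score_candidates(0.0, np.array([2.0, 1.0, 0.0]))
```

Without the sigmoid and softmax, the raw logits can be negative or exceed 1, which makes any fixed confidence threshold meaningless.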

Setup

Step 1: Setting up Jetson Nano and DeepStream.

  • Follow the instructions in the blog to set up your Jetson Nano and to install the DeepStream SDK.

Step 2: Clone this repository.

  • Use the below commands to clone the repository and move into it.
git clone https://github.com/thatbrguy/Deep-Stream-ONNX.git
cd Deep-Stream-ONNX

Step 3: Download the Tiny YOLOv2 ONNX model.

  • Download the Tiny YOLOv2 ONNX model from the ONNX Model Zoo. We used this model in our experiments.

Step 4: Compiling the custom bounding box parser.

  • A custom bounding box parser function is written in nvdsparsebbox_tiny_yolo.cpp inside the custom_bbox_parser folder.
  • A Makefile in the same folder is configured to compile the custom bounding box parsing function into a shared library (.so) file.
  • The following variables in the Makefile may need to be set before compiling:
# Set the CUDA version.
CUDA_VER:=10 
# Name of the file with the custom bounding box parser function.
SRCFILES:=nvdsparsebbox_tiny_yolo.cpp
# Name of the shared library file to be created after compilation.
TARGET_LIB:=libnvdsinfer_custom_bbox_tiny_yolo.so
# Path to the DeepStream SDK. REPLACE /path/to with the location in your Jetson Nano.
DEEPSTREAM_PATH:=/path/to/deepstream_sdk_v4.0_jetson

Note: If you made no changes to the code and followed the blog to set up the Jetson Nano and DeepStream, then only the DEEPSTREAM_PATH variable needs to be set before compilation; the other three variables can keep their default values.

  • Once the variables are set, save the Makefile. Compile the custom bounding box parsing function using: make -C custom_bbox_parser.

Step 5: Launching DeepStream.

  • Download the sample.tar.gz from this drive link. Extract the vids directory into the Deep-Stream-ONNX directory.
  • You can launch DeepStream using the following command:
deepstream-app -c ./config/deepstream_app_custom_yolo.txt
  • You can edit the config files inside the config directory to alter various settings. Refer to the blog for resources on understanding the various properties inside the config files.
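For orientation, the inference-related settings live in the nvinfer config referenced by deepstream_app_custom_yolo.txt. A hedged sketch of the relevant [property] keys, assuming the Tiny YOLOv2 (Pascal VOC, 20-class) model; the model file name and parser function name below are illustrative, so check the actual files in config/:

```
[property]
# Path to the ONNX model (illustrative file name).
onnx-file=tinyyolov2.onnx
# Shared library produced by `make -C custom_bbox_parser`.
custom-lib-path=custom_bbox_parser/libnvdsinfer_custom_bbox_tiny_yolo.so
# Parsing function exported by that library (illustrative name).
parse-bbox-func-name=NvDsInferParseCustomYoloV2Tiny
# Tiny YOLOv2 trained on Pascal VOC detects 20 classes.
num-detected-classes=20
```

The custom-lib-path and parse-bbox-func-name keys are how DeepStream's nvinfer plugin finds the bounding box parser compiled in Step 4.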

Notes

  • Methods for quickly verifying if an ONNX model will be accepted by DeepStream (v4.0):
    • Check if the opset version used is <= 9.
    • You can use onnx2trt to convert an ONNX file into a .trt file. I have noticed that if this conversion works, then DeepStream tends to accept the ONNX file. You can refer to the FAQ section for tips on setting up onnx2trt on the Jetson Nano.


License: MIT
