marcoslucianops / DeepStream-Yolo-Seg

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

Division by 0 error if there are no detections in the frame

rama-animaleyeq opened this issue

I managed to create a yolov8s-seg.onnx_b1_gpu0_fp16.engine model. It works fine when the video starts and there are detections, but if no detection is found in the first frame, I get "Division by 0" errors. Is there any way to avoid this?
The full output is below:

0:00:03.545107844 48470 0x55eeac34e330 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/home/rama/Documents/code-repo/codecommit/edgeai-deepstream-dev_weight_estimation/edgeai-deepstream/models/yolov8s-seg.onnx_b1_gpu0_fp16.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 5
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 100x4
2 OUTPUT kFLOAT scores 100x1
3 OUTPUT kFLOAT classes 100x1
4 OUTPUT kFLOAT masks 100x160x160

0:00:03.644677843 48470 0x55eeac34e330 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /home/rama/Documents/code-repo/codecommit/edgeai-deepstream-dev_weight_estimation/edgeai-deepstream/models/yolov8s-seg.onnx_b1_gpu0_fp16.engine
0:00:03.649166370 48470 0x55eeac34e330 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/rama/Documents/code-repo/github/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg_coco.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:239>: Pipeline ready

(deepstream-app:48470): GStreamer-WARNING **: 13:05:06.930: (../gst/gstinfo.c:556):gst_debug_log_valist: runtime check failed: (object == NULL || G_IS_OBJECT (object))
** INFO: <bus_callback:225>: Pipeline running

ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1650 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:03.971140577 48470 0x55eeacb92800 WARN nvinfer gstnvinfer.cpp:1388:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1388): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1650 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:03.981255708 48470 0x55eeacb92800 WARN nvinfer gstnvinfer.cpp:1388:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
nvstreammux: Successfully handled EOS for source_id=0
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1650 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:03.994894500 48470 0x55eeacb92800 WARN nvinfer gstnvinfer.cpp:1388:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1650 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.021695221 48470 0x55eeacb92800 WARN nvinfer gstnvinfer.cpp:1388:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: [TRT]: 1: [runner.cpp::shapeChangeHelper::621] Error Code 1: Myelin (Division by 0 detected in the shape graph. Tensor (Divisor) "sp__mye3" is equal to 0.; )
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1650 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.035448646 48470 0x55eeacb92800 WARN nvinfer gstnvinfer.cpp:1388:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1388): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1388): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1388): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1388): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
App run failed
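
For reference, one generic mitigation for this class of failure (not a fix suggested in this thread) is to ensure no tensor in the exported graph can end up with a zero-sized dimension when nothing survives NMS, for example by padding the decoded outputs to a fixed slot count inside the export wrapper. A minimal PyTorch sketch; the function name, tensor layout, and max_det value are assumptions for illustration, not the repo's actual export code:

    import torch

    def pad_detections(boxes, scores, classes, masks, max_det=100):
        # Pad post-NMS outputs so every output always has exactly
        # max_det rows, even when zero objects were detected.
        n = boxes.shape[0]
        pad = max_det - n
        if pad > 0:
            boxes = torch.cat([boxes, boxes.new_zeros(pad, 4)], dim=0)
            scores = torch.cat([scores, scores.new_zeros(pad, 1)], dim=0)
            # Class id -1 marks padded (empty) slots so the parser can skip them.
            classes = torch.cat([classes, classes.new_full((pad, 1), -1.0)], dim=0)
            masks = torch.cat([masks, masks.new_zeros(pad, *masks.shape[1:])], dim=0)
        return boxes[:max_det], scores[:max_det], classes[:max_det], masks[:max_det]

Note that the engine above already reports fixed 100-slot outputs, so if an intermediate tensor (e.g. the NMS-gathered indices) is the one collapsing to size 0, the padding would have to happen before that gather in the graph.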

Same error :(

I got the inference to work in a container on an AGX Orin running nvcr.io/nvidia/deepstream-l4t:6.2-base, which has TensorRT 8.5.2. So the error is probably due to TensorRT 8.6.1.6 on my desktop.
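
A quick way to confirm which TensorRT build each environment actually loads (assuming the tensorrt Python bindings are installed, as they are in the DeepStream containers) is:

    import tensorrt
    print(tensorrt.__version__)  # e.g. 8.5.2 inside the deepstream-l4t:6.2 container

Since a serialized .engine file is tied to the TensorRT version (and GPU) it was built with, the engine also has to be regenerated from the ONNX file on each machine rather than copied between them.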