marcoslucianops / DeepStream-Yolo-Seg

NVIDIA DeepStream SDK 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 implementation for YOLO-Segmentation models

Segmentation fault (core dumped)

avBuffer opened this issue · comments

1) Command run:
~/work/yolo_deepstream/DeepStream-Yolo-Seg$ deepstream-app -c deepstream_app_config.txt
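
For context: deepstream-app builds its pipeline from deepstream_app_config.txt, and the [primary-gie] section there points to the nvinfer config that shows up later in the log. A rough sketch of that section (standard deepstream-app keys, trimmed to the relevant lines, not a verbatim copy of my file):

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
# nvinfer config that declares yolov8s-seg.onnx and the segmentation output parser
config-file=config_infer_primary_yoloV8_seg.txt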

2) Error log:
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1487 Deserialize engine failed because file path: /home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine open error
0:00:02.666741042 13265 0x55aeb942c120 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed
0:00:02.667629067 13265 0x55aeb942c120 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:02.667649670 13265 0x55aeb942c120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
0:00:58.957847518 13265 0x55aeb942c120 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /home/work/yolo_deepstream/DeepStream-Yolo-Seg/yolov8s-seg.onnx_b1_gpu0_fp32.engine successfully
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT images 3x640x640
1 OUTPUT kFLOAT output1 32x160x160
2 OUTPUT kFLOAT output0 116x8400

0:00:58.974206457 13265 0x55aeb942c120 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/titanx/work/yolo_deepstream/DeepStream-Yolo-Seg/config_infer_primary_yoloV8_seg.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:239>: Pipeline ready

** INFO: <bus_callback:225>: Pipeline running

Segmentation fault (core dumped)
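
If more detail on the crash location helps, I can rerun the same command under gdb and capture a backtrace (standard gdb usage, nothing specific to this repo):

gdb --args deepstream-app -c deepstream_app_config.txt
(gdb) run
# after the segmentation fault is reported:
(gdb) bt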