ml6team / deepstream-python

NVIDIA Deepstream 6.1 Python boilerplate


Running issue while setting up the config file and putting models in the data folder

faridelya opened this issue · comments

Hi, I hope you are doing well.
First and foremost, I am kind of new to GStreamer. Could you please give me a roadmap for mastering GStreamer, e.g. tutorials (preferably Python-based) or any other good learning material? It would help me a lot, thanks.

Goal

  • First I want to run on an MP4 video file, then move towards connecting a camera. Please also show me how to connect a camera.
  • I have completed the environment setup and already tested the deepstream-python-apps and deepstream-reference-apps samples.

So now, coming to the problem:

  1. I put the config file for YOLOv4 at ==> deepstream-python/deepstream/configs/pgies/yolov4_saftey.txt
  2. I put all models and related files (engine file, .etlt, labels, and calibration cache) in ==> /app/data/pgies/yolov4/my_folder/ all-files
  3. I made some changes in core.py:
# core.py (excerpt) -- os, CONFIGS_DIR and Pipeline are imported at the top of the file
def run_pipeline(video_uri: str):
    pipeline = Pipeline(
        video_uri=video_uri,
        pgie_config_path=os.path.join(CONFIGS_DIR, "pgies/yolov4_saftey.txt"),  # changed: point the PGIE at my custom YOLOv4 config  <--------------
        tracker_config_path=os.path.join(CONFIGS_DIR, "trackers/nvdcf.txt"),
        output_format="mp4",
    )
    pipeline.run()

  1. Built the Docker image successfully.
  2. When I run the following command, it gives the error below:
docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'

Error

/Desktop/farid/deepstream-python/deepstream$ docker run -it --gpus all -v ~/deepstream-python/output:/app/output deepstream python3 run.py 'file:///app/data/videos/sample_720p.h264'
INFO:app.pipeline.Pipeline:Playing from URI file:///app/data/videos/sample_720p.h264

(gst-plugin-scanner:7): GStreamer-WARNING **: 05:11:46.981: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so': libtritonserver.so: cannot open shared object file: No such file or directory

(gst-plugin-scanner:7): GStreamer-WARNING **: 05:11:46.983: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_udp.so': librivermax.so.0: cannot open shared object file: No such file or directory
INFO:app.pipeline.Pipeline:Creating Pipeline
INFO:app.pipeline.Pipeline:Creating Source bin
INFO:app.pipeline.Pipeline:Creating URI decode bin
INFO:app.pipeline.Pipeline:Creating Stream mux
INFO:app.pipeline.Pipeline:Creating PGIE
INFO:app.pipeline.Pipeline:Creating Tracker
INFO:app.pipeline.Pipeline:Creating Converter 1
INFO:app.pipeline.Pipeline:Creating Caps filter 1
INFO:app.pipeline.Pipeline:Creating Tiler
INFO:app.pipeline.Pipeline:Creating Converter 2
INFO:app.pipeline.Pipeline:Creating OSD
INFO:app.pipeline.Pipeline:Creating Queue 1
INFO:app.pipeline.Pipeline:Creating Converter 3
INFO:app.pipeline.Pipeline:Creating Caps filter 2
INFO:app.pipeline.Pipeline:Creating Encoder
INFO:app.pipeline.Pipeline:Creating Parser
INFO:app.pipeline.Pipeline:Creating Container
INFO:app.pipeline.Pipeline:Creating Sink
INFO:app.pipeline.Pipeline:Linking elements in the Pipeline: source-bin-00 -> stream-muxer -> primary-inference -> tracker -> convertor1 -> capsfilter1 -> nvtiler -> convertor2 -> onscreendisplay -> queue1 -> mp4-sink-bin
INFO:app.pipeline.Pipeline:Starting pipeline
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:30 Could not open lib: /app/data, error string: /app/data: cannot open shared object file: No such file or directory
[NvMultiObjectTracker] De-initialized
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: /app/app/../configs/pgies/yolov4_saftey.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED
INFO:app.pipeline.Pipeline:Exiting pipeline


My config file yolov4_saftey.txt:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1

labelfile-path=/app/data/pgies/yolov4/export_retrain/labels.txt
model-engine-file=/app/data/pgies/yolov4/export_retrain/trt.engine
int8-calib-file=/app/data/pgies/yolov4/export_retrain/cal.bin
tlt-encoded-model=/app/data/pgies/yolov4/yolov4_resnet18_epoch_080.etlt
tlt-model-key=NGpmbHN0ZTNrZHFkOGRxNnFsbW9rbXNxbnU6Yzc5NWM5MjQtZDE1YS00NTYxLTg3YzgtNTU2MWVhNDg1M2M3

infer-dims=3;384;1248
force-implicit-batch-dim=1
maintain-aspect-ratio=1
batch-size=1
network-mode=0
uff-input-order=0
uff-input-blob-name=Input
num-detected-classes=6
interval=0
gie-unique-id=1
network-type=0
cluster-mode=3
process-mode=1
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomBatchedNMSTLT
custom-lib-path=/app/data/pgies/libnvds_infercustomparser_tao.so

[class-attrs-all]
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Please @joxis

Hi @cuongh712 and @joxis, please have a look. I only need to use my custom-trained YOLOv4 in this app for object detection, so which pipeline should I use? You can check the settings I have made for my models above. Please check this out.
Thanks a lot.

Hi @faridelya, the error indicates that it fails to find the custom inference library required for YOLOv4: Config file path: /app/app/../configs/pgies/yolov4_saftey.txt, NvDsInfer Error: NVDSINFER_CUSTOM_LIB_FAILED. Are you sure this file is present inside your container (custom-lib-path=/app/data/pgies/libnvds_infercustomparser_tao.so)?
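A quick way to catch this kind of problem before launching the pipeline is to check every file path referenced in the config. Here is a minimal sketch (a hypothetical helper, not part of the repo) that reads the [property] section of an nvinfer config, which is plain INI syntax, and reports any referenced file that does not exist:

```python
import configparser
import os

def check_pgie_paths(config_path: str) -> list:
    """Return (key, path) pairs from an nvinfer config whose file is missing.

    Only keys known to hold file paths are checked; keys absent from the
    config are skipped.
    """
    path_keys = (
        "labelfile-path", "model-engine-file", "int8-calib-file",
        "tlt-encoded-model", "custom-lib-path",
    )
    parser = configparser.ConfigParser()
    parser.read(config_path)
    missing = []
    for key in path_keys:
        value = parser.get("property", key, fallback="").strip()
        if value and not os.path.isfile(value):
            missing.append((key, value))
    return missing
```

Running this inside the container (e.g. docker run -it deepstream python3) against the config would immediately flag a missing custom-lib-path instead of failing at pipeline start.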

Thank you @joxis for your kind response.
I solved that problem and built and ran the Docker image, but I cannot see the inference video when I run it. It shows output in the terminal but there is no display.

Which pipeline can we use for simple object detection (YOLOv4), not for blurring objects, so that we can easily modify the config and model paths?

One more question: can we access this app from a remote computer at rtsp://<jetson-ip>:<port>/ds-test in VLC?

Again, thanks a lot for your time.

You can set the output_format to rtsp. This will create an RTSP sink for your output. The URL where you can access the stream will be logged to the console (https://github.com/ml6team/deepstream-python/blob/master/deepstream/app/pipeline.py#L537).
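For illustration, assuming the same run_pipeline signature as in the snippet earlier in this thread (Pipeline itself lives in the repo, so this sketch only builds the keyword arguments; the config file name is taken from the config posted above):

```python
import os

CONFIGS_DIR = "configs"  # assumed; matches the layout used in the earlier snippet

def build_rtsp_pipeline_kwargs(video_uri: str) -> dict:
    """Build the run_pipeline arguments, but with an RTSP sink.

    Pipeline(**build_rtsp_pipeline_kwargs(uri)).run() would then serve the
    annotated stream at an rtsp:// URL that the app logs on startup.
    """
    return {
        "video_uri": video_uri,
        "pgie_config_path": os.path.join(CONFIGS_DIR, "pgies/yolov4_saftey.txt"),
        "tracker_config_path": os.path.join(CONFIGS_DIR, "trackers/nvdcf.txt"),
        "output_format": "rtsp",  # "mp4" writes a file instead of streaming
    }
```

The only change relative to the MP4 version is the output_format value; everything else in the pipeline stays the same.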

Thank you very much @joxis .