dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.

Home Page: https://developer.nvidia.com/embedded/twodaystoademo


Running PeopleNet with detectNet on JetPack 6

AkshatJain-TerraFirma opened this issue · comments

Hello @dusty-nv

I downloaded PeopleNet directly from https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/models/peoplenet. These are the contents of the downloaded folder:
labels.txt nvinfer_config.txt resnet34_peoplenet_int8.txt resnet34_peoplenet.onnx status.json

When I run the following script:

import jetson_inference

net = jetson_inference.detectNet(
    model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx",
    labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
    input_blob="input_0", output_cvg="scores", output_bbox="boxes",
    threshold=0.8)

I get the following error:

[TRT] 4: [network.cpp::validate::3162] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx
[TRT] detectNet -- failed to initialize.
Traceback (most recent call last):
File "/home/akshat/terrafirma/v2/operator_station/vehicle_control/detect.py", line 12, in
net = jetson_inference.detectNet(model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx", labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
Exception: jetson.inference -- detectNet failed to load network

Is this an issue with the parameters passed into the detectNet method, or does the model need to be converted to a .engine file first? Do I manually have to run tao-converter on the .onnx file?

(I am a complete beginner, so sorry if these questions are silly.)
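From what I can tell, the dynamic-shape error means the ONNX export has a dynamic batch dimension, so TensorRT needs an optimization profile before it can build an engine. Here is a rough, untested sketch of building one with the TensorRT Python API and a profile pinned to 1x3x544x960 (the 3x544x960 input size is the documented PeopleNet resolution, but I haven't confirmed it for this export, and the script prints the input name rather than guessing it):

import tensorrt as trt

ONNX = "/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx"

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("failed to parse ONNX")

# print the input binding's real name and shape instead of guessing it
inp = network.get_input(0)
print("input:", inp.name, inp.shape)

config = builder.create_builder_config()
profile = builder.create_optimization_profile()
# pin the dynamic batch dimension to 1 (min = opt = max)
profile.set_shape(inp.name, (1, 3, 544, 960), (1, 3, 544, 960), (1, 3, 544, 960))
config.add_optimization_profile(profile)

with open("resnet34_peoplenet.engine", "wb") as f:
    f.write(builder.build_serialized_network(network, config))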

I converted it to a .engine file and got the following error:

3: Cannot find binding of given name: input_0
[TRT] failed to find requested input layer input_0 in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to load /home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.engine
[TRT] detectNet -- failed to initialize.

The command I ran to convert it to a .engine file:
/usr/src/tensorrt/bin/trtexec --onnx=/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.onnx --saveEngine=resnet34_peoplenet.engine --fp16
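To find the real binding names instead of guessing them, something like this sketch (TensorRT Python API, 8.5 or newer; engine path assumed) prints the engine's actual I/O tensors:

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

with open("resnet34_peoplenet.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# list every input/output tensor the engine actually exposes
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(engine.get_tensor_mode(name), name, engine.get_tensor_shape(name))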

So I fixed that error by updating input_0 and the output bindings, but detection still does not work: I get the warning below on every frame, and no labels are ever detected.

[TRT]    The execute() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use executeV2() instead.
[TRT]    Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.
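For reference, the updated call looks roughly like this. The binding names below are my guesses for the TAO PeopleNet ONNX export (input_1:0, output_cov/Sigmoid:0, output_bbox/BiasAdd:0); I would trust the names printed from the engine over these:

import jetson_inference

# binding names are assumptions for the TAO export; verify them against the
# engine's actual I/O tensors before relying on this
net = jetson_inference.detectNet(
    model="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/resnet34_peoplenet.engine",
    labels="/home/akshat/jetson-inference/data/networks/peoplenet_deployable_quantized_onnx_v2.6.2/labels.txt",
    input_blob="input_1:0",
    output_cvg="output_cov/Sigmoid:0",
    output_bbox="output_bbox/BiasAdd:0",
    threshold=0.8)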

Hi @AkshatJain-TerraFirma, you may need to change this part of jetson-inference/c/detectNet.cpp:

else if( IsModelType(MODEL_ONNX) )

It expects that detection ONNX models are made from the pytorch-ssd training scripts in the repo. Meanwhile, the TAO PeopleNet models normally fall under the MODEL_ENGINE category here:

else if( IsModelType(MODEL_ENGINE) )

So you may need to change that if you are using a different ONNX. For the TAO models, this script uses tao-converter to build the TRT engine, which jetson-inference can then load (but as mentioned in the other issue, I had not tried that on JP6):

function tao_to_trt()
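Roughly, that function wraps a tao-converter invocation. A hedged sketch of the kind of command involved, wrapped in Python for illustration (the key, input dims, and output blob names come from the TAO PeopleNet quick-start examples and are assumptions, not copied from the script; note it takes the .etlt model, not the .onnx):

import subprocess

# hypothetical equivalent of tao_to_trt() for PeopleNet
subprocess.run([
    "tao-converter",
    "-k", "tlt_encode",                              # public key used for NGC PeopleNet
    "-d", "3,544,960",                               # C,H,W input dims
    "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",  # DetectNet_v2 output blobs
    "-t", "fp16",
    "-e", "resnet34_peoplenet.engine",               # engine path to write
    "resnet34_peoplenet.etlt",                       # the .etlt model (not the .onnx)
], check=True)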