hova88 / PointPillars_MultiHead_40FPS

A real-time 3D detection network (PointPillars) implemented with CUDA/TensorRT/C++.

ONNX model --> TensorRT model error

chasingw opened this issue · comments

Hi, thanks for the nice work.
I got an error when I ran onnx2trt cbgs_pp_multihead_backbone.onnx -o cbgs_pp_multihead_backbone.trt -b 1 -d 16:

(base) ➜  default git:(main) onnx2trt cbgs_pp_multihead_backbone.onnx -o cbgs_pp_multihead_backbone.trt -b 1 -d 16
----------------------------------------------------------------
Input filename:   cbgs_pp_multihead_backbone.onnx
ONNX IR version:  0.0.6
Opset version:    10
Producer name:    pytorch
Producer version: 1.7
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to create object
[1]    19112 abort (core dumped)  onnx2trt cbgs_pp_multihead_backbone.onnx -o cbgs_pp_multihead_backbone.trt -b 1 -d

My onnx2trt version:

(base) ➜  default git:(main) onnx2trt -V                                                                     
Parser built against:
  ONNX IR version:  0.0.6
  TensorRT version: 7.1.3

I found some related issues, but no solution. Any suggestions?
Crash when model with cast #406
run onnx2trt ERROR FAILED_ALLOCATION: std::bad_alloc #549
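As an aside, the IR and opset versions printed in the log above can be read straight from the .onnx file without any ONNX tooling, since an .onnx file is a serialized ModelProto. A minimal stdlib-only sketch, assuming the standard onnx.proto field numbering (ModelProto.ir_version = field 1, ModelProto.opset_import = field 8, OperatorSetIdProto.version = field 2):

```python
def read_varint(buf, i):
    """Decode a protobuf varint starting at buf[i]; return (value, next_index)."""
    result, shift = 0, 0
    while True:
        b = buf[i]
        result |= (b & 0x7F) << shift
        i += 1
        if not b & 0x80:
            return result, i
        shift += 7

def scan_fields(buf):
    """Yield (field_number, wire_type, value) for each top-level protobuf field."""
    i = 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire = key >> 3, key & 7
        if wire == 0:                       # varint
            value, i = read_varint(buf, i)
            yield field, wire, value
        elif wire == 2:                     # length-delimited (bytes / submessage)
            length, i = read_varint(buf, i)
            yield field, wire, buf[i:i + length]
            i += length
        elif wire == 1:                     # fixed64 -- skip
            i += 8
        elif wire == 5:                     # fixed32 -- skip
            i += 4
        else:
            raise ValueError(f"unsupported wire type {wire}")

def onnx_versions(model_bytes):
    """Return (ir_version, [opset versions]) from raw ModelProto bytes."""
    ir_version, opsets = None, []
    for field, wire, value in scan_fields(model_bytes):
        if field == 1 and wire == 0:        # ModelProto.ir_version
            ir_version = value
        elif field == 8 and wire == 2:      # ModelProto.opset_import
            for f, w, v in scan_fields(value):
                if f == 2 and w == 0:       # OperatorSetIdProto.version
                    opsets.append(v)
    return ir_version, opsets

# Usage: onnx_versions(open("cbgs_pp_multihead_backbone.onnx", "rb").read())
```

For the model in the log this should report ir_version 6 (onnx2trt appears to print that integer as "0.0.6") and opset 10.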

Which OpenPCDet do you use: hova88/OpenPCDet or open-mmlab/OpenPCDet?

Hi, thanks for your reply. I used the files linked in the README:
cbgs_pp_multihead_pfe.onnx
cbgs_pp_multihead_backbone.onnx

It seems the developers are planning to deprecate onnx2trt in favor of trtexec:

We are planning to deprecate onnx2trt in favor of trtexec, (see https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec). Can you use the trtexec binary to try and convert your model?
Originally posted by @kevinch-nv in onnx/onnx-tensorrt#549 (comment)

Source code:
onnx2trt: https://github.com/onnx/onnx-tensorrt/blob/master/main.cpp
trtexec: https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/trtexec/trtexec.cpp

So I tried trtexec, using

trtexec --onnx=cbgs_pp_multihead_backbone.onnx --saveEngine=cbgs_pp_multihead_backbone.trt --device=0 --fp16

in place of

onnx2trt cbgs_pp_multihead_backbone.onnx -o cbgs_pp_multihead_backbone.trt -b 1 -d 16

It worked for me, so I will close this issue.
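The command substitution above can be sketched as a small helper that maps the old onnx2trt flags onto trtexec arguments. The mapping here is an assumption based only on the two commands quoted in this thread (-o → --saveEngine, -d 16 → --fp16); onnx2trt's -b (max batch size) has no direct single-flag counterpart in this trtexec invocation and is simply dropped.

```python
def onnx2trt_to_trtexec(onnx_path, engine_path, fp16=True, device=0):
    """Build a trtexec argument list roughly equivalent to
    `onnx2trt <onnx> -o <engine> -b 1 -d 16` (assumed flag mapping)."""
    args = [
        "trtexec",
        f"--onnx={onnx_path}",          # input model (a positional arg in onnx2trt)
        f"--saveEngine={engine_path}",  # replaces -o <engine>
        f"--device={device}",           # GPU index to build on
    ]
    if fp16:
        args.append("--fp16")           # replaces -d 16 (16-bit data type)
    return args

# Usage (hypothetical):
# subprocess.run(onnx2trt_to_trtexec("cbgs_pp_multihead_backbone.onnx",
#                                    "cbgs_pp_multihead_backbone.trt"), check=True)
```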

But can the pfe.onnx model be fully parsed?

Oh, thank you for the reminder. onnx2trt seems to have no problem on my machine, but I will try trtexec later.

pfe.onnx also worked, at least for conversion to a TRT engine.