xmba15 / onnx_runtime_cpp

small C++ library to quickly deploy models using onnxruntime

This problem occurs when I use TensorRT as the backend to run SuperGlue. Please give me some advice! Thank you.

intjun opened this issue · comments

commented

./super_glue super_point.onnx super_glue.onnx 1.png 2.png

    // Register the TensorRT execution provider first, then CUDA as a fallback,
    // on the selected GPU.
    if (m_gpuIdx.has_value()) {
        Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Tensorrt(sessionOptions, m_gpuIdx.value()));
        Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, m_gpuIdx.value()));
    }

terminate called after throwing an instance of 'Ort::Exception'
what(): Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/tensorrt/tensorrt_execution_provider.cc:798 SubGraphCollection_t onnxruntime::TensorrtExecutionProvider::GetSupportedList(SubGraphCollection_t, int, int, const onnxruntime::GraphViewer&, bool*) const [ONNXRuntimeError] : 1 : FAIL : TensorRT input: onnx::Where_3463 has no shape specified. Please run shape inference on the onnx model first. Details can be found in https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#shape-inference-for-tensorrt-subgraphs

commented

Currently the input shapes for SuperGlue are dynamic, as you can see here:
https://github.com/xmba15/onnx_runtime_cpp/blob/master/scripts/superglue/convert_to_onnx.py#L46-L55

    torch.onnx.export(
        model,
        data,
        "super_glue.onnx",
        export_params=True,
        opset_version=12,
        do_constant_folding=True,
        input_names=list(data.keys()),
        output_names=["matches0", "matches1", "matching_scores0", "matching_scores1"],
        dynamic_axes={
            "keypoints0": {0: "batch_size", 1: "num_keypoints0"},
            "scores0": {0: "batch_size", 1: "num_keypoints0"},
            "descriptors0": {0: "batch_size", 2: "num_keypoints0"},
            "keypoints1": {0: "batch_size", 1: "num_keypoints1"},
            "scores1": {0: "batch_size", 1: "num_keypoints1"},
            "descriptors1": {0: "batch_size", 2: "num_keypoints1"},
            "matches0": {0: "batch_size", 1: "num_keypoints0"},
            "matches1": {0: "batch_size", 1: "num_keypoints1"},
            "matching_scores0": {0: "batch_size", 1: "num_keypoints0"},
            "matching_scores1": {0: "batch_size", 1: "num_keypoints1"},
        },
    )

To use the TensorRT backend, you have to fix num_keypoints* when converting to the ONNX format.
You also need to supply exactly that number of keypoints from the C++ side, either by dropping keypoints when you detect too many or by padding with dummy keypoints when you do not have enough:
https://github.com/xmba15/onnx_runtime_cpp/blob/master/examples/SuperGlueApp.cpp#L64-L88
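
For the conversion side, a minimal sketch of a fixed-shape export could look like the following. The keypoint budget of 512, the descriptor size of 256, and the dummy-tensor layout are assumptions on my part; adapt them to the actual input construction in convert_to_onnx.py.

    # Sketch: export SuperGlue with num_keypoints* fixed so that the TensorRT
    # execution provider sees fully specified input shapes.
    import torch

    NUM_KEYPOINTS = 512  # assumed fixed budget; pad/truncate to this count at runtime

    # Dummy inputs with the fixed keypoint count; `model` is the SuperGlue module
    # loaded exactly as in convert_to_onnx.py.
    data = {
        "keypoints0": torch.zeros(1, NUM_KEYPOINTS, 2),
        "scores0": torch.zeros(1, NUM_KEYPOINTS),
        "descriptors0": torch.zeros(1, 256, NUM_KEYPOINTS),
        "keypoints1": torch.zeros(1, NUM_KEYPOINTS, 2),
        "scores1": torch.zeros(1, NUM_KEYPOINTS),
        "descriptors1": torch.zeros(1, 256, NUM_KEYPOINTS),
    }

    torch.onnx.export(
        model,
        data,
        "super_glue_static.onnx",
        export_params=True,
        opset_version=12,
        do_constant_folding=True,
        input_names=list(data.keys()),
        output_names=["matches0", "matches1", "matching_scores0", "matching_scores1"],
        # No dynamic_axes: every dimension, including num_keypoints*, is baked in
        # from the shapes of the dummy inputs above.
    )

Whatever keypoint count you pick at export time has to match the padding/truncation done on the C++ side in SuperGlueApp.cpp.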

I think this is the only way to work around the problem, as the TensorRT execution provider does not allow dynamic shapes here.
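
The error message also suggests running shape inference on the model. ONNX Runtime ships a symbolic shape inference tool for exactly this, but it only propagates shapes that are already determined and cannot turn a genuinely dynamic axis such as num_keypoints* into a fixed one, so it complements the fixed-shape export rather than replacing it. A minimal sketch, assuming a recent onnxruntime where the tool is importable as shown and using placeholder file names:

    # Sketch: propagate shapes through the exported graph with ONNX Runtime's
    # symbolic shape inference, as the TensorRT-EP error message suggests.
    import onnx
    from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

    model = onnx.load("super_glue_static.onnx")
    # auto_merge=True merges conflicting symbolic dimensions instead of aborting.
    inferred = SymbolicShapeInference.infer_shapes(model, auto_merge=True)
    onnx.save(inferred, "super_glue_static.shape_inferred.onnx")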

commented

Thank you very much for your advice!