xmba15 / onnx_runtime_cpp

small c++ library to quickly deploy models using onnxruntime

How much time does it take to run each picture of superglue?

intjun opened this issue · comments

commented

It takes more than 1 s to process each picture on an RTX 3060. Is this normal? Thanks.

@intjun

  • Normally, loading the model weights onto the GPU at the beginning takes a long time. This cost is sometimes mistaken for processing time.

Here are the two places where model weights are loaded:
https://github.com/xmba15/onnx_runtime_cpp/blob/master/examples/SuperGlueApp.cpp#L35-L37
and
https://github.com/xmba15/onnx_runtime_cpp/blob/master/examples/SuperGlueApp.cpp#L65-L75
You should exclude the time spent in these two places when benchmarking the processing time.

  • The GPU also needs time to warm up, so when measuring processing time you should run the inference multiple times and average the results to compensate for the warm-up cost.
    Here is a snippet that calculates the average processing time:
    cv::TickMeter meter;
    int numInferences = 100;

    meter.reset();
    meter.start();
    std::vector<Ort::OrtSessionHandler::DataOutputType> superGlueOrtOutput;
    for (int i = 0; i < numInferences; ++i) {
        superGlueOrtOutput =
            superGlueOsh({imageShapes[0].data(), scores[0].data(), keypoints[0].data(), descriptors[0].data(),
                          imageShapes[1].data(), scores[1].data(), keypoints[1].data(), descriptors[1].data()});
    }
    meter.stop();
    std::cout << "processing: " << meter.getTimeMilli() * 1.0 / numInferences << "[ms]" << std::endl;

Measured on a Titan X, I got an average processing time of 92 ms.

@intjun Does this answer your question?

commented

> @intjun Does this answer your question?
Yes, thank you. But it may be a problem with the onnxruntime library; mine isn't that fast. Did you compile onnxruntime yourself?