
ONNX Runtime Inference C++ Example

Home Page: https://leimao.github.io/blog/ONNX-Runtime-CPP-Inference/


ONNX Runtime Inference

Introduction

ONNX Runtime C++ inference example for image classification using CPU and CUDA.
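At runtime the example picks the execution provider from the --use_cpu / --use_cuda flags shown in the Run Example section. Below is a minimal sketch of how that selection is typically made with the ONNX Runtime 1.6 C++ API; the createSession helper, the USE_CUDA guard, and the cuda_provider_factory.h include path are illustrative, not the repository's exact source.

// Minimal sketch (not the repository's exact source) of creating a session
// with either the default CPU execution provider or the CUDA execution
// provider, using the ONNX Runtime 1.6 C++ API.
#include <onnxruntime_cxx_api.h>
#ifdef USE_CUDA
#include <cuda_provider_factory.h> // declares OrtSessionOptionsAppendExecutionProvider_CUDA
#endif

Ort::Session createSession(const char* modelPath, bool useCuda)
{
    static Ort::Env env{ORT_LOGGING_LEVEL_WARNING, "inference"};

    Ort::SessionOptions sessionOptions;
    sessionOptions.SetIntraOpNumThreads(1);
    sessionOptions.SetGraphOptimizationLevel(
        GraphOptimizationLevel::ORT_ENABLE_EXTENDED);

#ifdef USE_CUDA
    if (useCuda)
    {
        // Register the CUDA execution provider for GPU device 0; without this
        // call the session runs on the default CPU execution provider.
        OrtSessionOptionsAppendExecutionProvider_CUDA(sessionOptions, 0);
    }
#endif

    return Ort::Session{env, modelPath, sessionOptions};
}

The "Inference Execution Provider" line in the example output below reflects this choice.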

Dependencies

  • CMake 3.16.8
  • ONNX Runtime 1.6.0
  • OpenCV 4.5.0

Usage

Build Docker Image

$ docker build -f docker/onnxruntime-cuda.Dockerfile --no-cache --tag=onnxruntime-cuda:1.6.0 .

Run Docker Container

$ docker run -it --rm --gpus device=0 -v $(pwd):/mnt onnxruntime-cuda:1.6.0

Build Example

$ cmake -B build
$ cmake --build build --config Release --parallel

Run Example

$ cd build/src/
$ ./inference --use_cpu
Inference Execution Provider: CPU
Number of Input Nodes: 1
Number of Output Nodes: 1
Input Name: data
Input Type: float
Input Dimensions: [1, 3, 224, 224]
Output Name: squeezenet0_flatten0_reshape0
Output Type: float
Output Dimensions: [1, 1000]
Predicted Label ID: 92
Predicted Label: n01828970 bee eater
Uncalibrated Confidence: 0.996137
Minimum Inference Latency: 7.45 ms
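
The input and output node information printed above can be queried directly from the session. A minimal sketch, assuming the ONNX Runtime 1.6 C++ API in which GetInputName takes an allocator (the printInputInfo helper is illustrative):

// Minimal sketch of querying node counts and the first input node's name and
// shape, as printed in the example output above (ONNX Runtime 1.6 C++ API).
#include <onnxruntime_cxx_api.h>
#include <cstdint>
#include <iostream>
#include <vector>

void printInputInfo(Ort::Session& session)
{
    Ort::AllocatorWithDefaultOptions allocator;

    std::cout << "Number of Input Nodes: " << session.GetInputCount() << std::endl;
    std::cout << "Number of Output Nodes: " << session.GetOutputCount() << std::endl;

    // Name and dimensions of the first input node; tensorInfo.GetElementType()
    // would additionally yield the element type (float here).
    const char* inputName = session.GetInputName(0, allocator);
    Ort::TypeInfo typeInfo = session.GetInputTypeInfo(0);
    auto tensorInfo = typeInfo.GetTensorTypeAndShapeInfo();
    std::vector<int64_t> dims = tensorInfo.GetShape();

    std::cout << "Input Name: " << inputName << std::endl;
    std::cout << "Input Dimensions: [";
    for (std::size_t i = 0; i < dims.size(); ++i)
    {
        std::cout << dims[i] << (i + 1 < dims.size() ? ", " : "");
    }
    std::cout << "]" << std::endl;
}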
$ cd build/src/
$ ./inference --use_cuda
Inference Execution Provider: CUDA
Number of Input Nodes: 1
Number of Output Nodes: 1
Input Name: data
Input Type: float
Input Dimensions: [1, 3, 224, 224]
Output Name: squeezenet0_flatten0_reshape0
Output Type: float
Output Dimensions: [1, 1000]
Predicted Label ID: 92
Predicted Label: n01828970 bee eater
Uncalibrated Confidence: 0.996137
Minimum Inference Latency: 0.98 ms
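
The "Minimum Inference Latency" figure is presumably the fastest of a number of repeated runs on the same input, which filters out one-time costs such as lazy CUDA initialization, and the "Uncalibrated Confidence" appears to be the softmax probability of the top class. A minimal sketch of such a minimum-latency measurement (the minimumLatencyMs helper, its parameters, and the repetition count are illustrative):

// Minimal sketch of measuring a minimum inference latency: run the session
// repeatedly on the same input tensor and keep the fastest wall-clock time.
#include <onnxruntime_cxx_api.h>
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <limits>
#include <vector>

double minimumLatencyMs(Ort::Session& session,
                        const char* inputName, const char* outputName,
                        std::vector<float>& inputData,
                        const std::vector<int64_t>& inputDims,
                        int numRuns = 100)
{
    Ort::MemoryInfo memoryInfo =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
        memoryInfo, inputData.data(), inputData.size(),
        inputDims.data(), inputDims.size());

    double minMs = std::numeric_limits<double>::max();
    for (int i = 0; i < numRuns; ++i)
    {
        const auto begin = std::chrono::high_resolution_clock::now();
        auto outputs = session.Run(Ort::RunOptions{nullptr},
                                   &inputName, &inputTensor, 1,
                                   &outputName, 1);
        const auto end = std::chrono::high_resolution_clock::now();
        minMs = std::min(
            minMs, std::chrono::duration<double, std::milli>(end - begin).count());
    }
    return minMs;
}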


License

MIT License

