MrLaki5 / TensorRT-onnx-dockerized-inference

Dockerized TensorRT inference engine with an ONNX model conversion tool and C++ pre- and post-processing implementations for ResNet50 and Ultraface.



  • TensorRT engine inference with ONNX model conversion
  • Dockerized environment with CUDA 10.2, TensorRT 7, and OpenCV 3.4 (built with CUDA support)
  • ResNet50 preprocessing and postprocessing implementation
  • Ultraface preprocessing and postprocessing implementation

Requirements

  • NVIDIA GPU with a CUDA-compatible driver (the image ships CUDA 10.2 and TensorRT 7)
  • Docker, with the NVIDIA Container Toolkit for GPU access inside the container

Build

Pull docker image

  • Pull the prebuilt container image from this repo's GitHub packages:
docker pull ghcr.io/mrlaki5/tensorrt-onnx-dockerized-inference:latest

Build Docker image from source

  • Download the TensorRT 7 installation package from the NVIDIA TensorRT download page
  • Place the downloaded TensorRT 7 .deb file in the root directory of this repo
  • Build:
cd ./docker
./build.sh

Run

From the root of the repo, start the Docker container with the command below:

./docker/run.sh
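The contents of run.sh are not reproduced here; for a GPU container that also needs a camera and a GUI window (as the Ultraface test does), a typical invocation looks roughly like the sketch below. The image name matches the published package; every flag is an assumption, not the script's actual contents.

```shell
# Hypothetical sketch; the actual flags live in docker/run.sh.
# --gpus all requires the NVIDIA Container Toolkit on the host.
# /dev/video0 and the X11 mounts are only needed for the Ultraface GUI test.
docker run -it --rm \
    --gpus all \
    --device /dev/video0 \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    ghcr.io/mrlaki5/tensorrt-onnx-dockerized-inference:latest
```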

ResNet50 inference test

./ResNet50_test
  • Input: a sample cat image (embedded in the original README)
  • Output: Siamese cat, Siamese (confidence: 0.995392)
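A confidence value like the 0.995392 above is typically produced by classifier postprocessing of this shape: softmax over the network's raw logits, then the top-1 index and its probability. The sketch below shows that step in isolation; it is an illustration of the standard technique, not the repo's exact code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of typical classification postprocessing: softmax over the raw
// logits, then the top-1 class index and its confidence. The index would
// then be mapped to an ImageNet label such as "Siamese cat, Siamese".
std::pair<std::size_t, float> top1(const std::vector<float>& logits) {
    // Subtract the max logit for numerical stability before exponentiating
    float max_logit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> exps(logits.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        exps[i] = std::exp(logits[i] - max_logit);
        sum += exps[i];
    }
    std::size_t best =
        std::max_element(exps.begin(), exps.end()) - exps.begin();
    return {best, exps[best] / sum};  // {class index, softmax confidence}
}
```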

Ultraface detector inference test

  • Note: this test requires a camera device. It opens a GUI window showing the camera stream overlaid with face detections.
./Ultraface_test
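Ultraface-style detector postprocessing generally filters out low-confidence boxes and then applies non-maximum suppression (NMS) so that each face is reported once. A minimal sketch of that step is below; the score and IoU thresholds are illustrative assumptions, not the repo's values.

```cpp
#include <algorithm>
#include <vector>

// Sketch of detector postprocessing: drop low-confidence boxes, then
// greedy NMS that keeps the highest-scoring box and suppresses any
// remaining box whose IoU with a kept box is too high.
struct Box { float x1, y1, x2, y2, score; };

static float iou(const Box& a, const Box& b) {
    float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
    float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
    float inter = std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);
    float uni = (a.x2 - a.x1) * (a.y2 - a.y1)
              + (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

std::vector<Box> nms(std::vector<Box> boxes,
                     float score_thr = 0.7f, float iou_thr = 0.5f) {
    // Process boxes in descending score order
    std::sort(boxes.begin(), boxes.end(),
              [](const Box& a, const Box& b) { return a.score > b.score; });
    std::vector<Box> kept;
    for (const Box& b : boxes) {
        if (b.score < score_thr) continue;  // confidence filter
        bool suppressed = false;
        for (const Box& k : kept)
            if (iou(b, k) > iou_thr) { suppressed = true; break; }
        if (!suppressed) kept.push_back(b);
    }
    return kept;
}
```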
