Lapland-UAS-Tequ / tequ-setup-triton-inference-server

Configure NVIDIA Triton Inference Server on different platforms, deploy an object detection model in TensorFlow SavedModel format to the server, and send images to the server for inference from Node-RED. Inference requests use the Triton Inference Server HTTP API.
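Below is a minimal Python sketch of such an inference request against Triton's HTTP API (the KServe v2 protocol that Triton implements); the repository itself sends these requests from Node-RED flows. The model name `mymodel`, the input name `input_tensor`, and the output names are assumptions typical of a TensorFlow 2 object detection SavedModel, not values taken from this repository; check the real names with `GET /v2/models/<name>` on your server.

```python
# Sketch: send one image to Triton's HTTP inference endpoint
# (POST /v2/models/<model>/infer, KServe v2 protocol).
import json

import numpy as np
import requests
from PIL import Image

TRITON_URL = "http://localhost:8000"   # Triton's default HTTP port
MODEL_NAME = "mymodel"                 # hypothetical model name

# Load an image as a UINT8 tensor of shape [1, H, W, 3], the input
# layout commonly expected by TF2 object detection SavedModels.
image = np.asarray(Image.open("test.jpg").convert("RGB"))
batch = np.expand_dims(image, axis=0)

payload = {
    "inputs": [
        {
            "name": "input_tensor",    # assumed input tensor name
            "shape": list(batch.shape),
            "datatype": "UINT8",
            "data": batch.flatten().tolist(),
        }
    ],
    # Requesting specific outputs is optional; these names are assumed.
    "outputs": [
        {"name": "detection_boxes"},
        {"name": "detection_scores"},
        {"name": "detection_classes"},
    ],
}

resp = requests.post(
    f"{TRITON_URL}/v2/models/{MODEL_NAME}/infer",
    data=json.dumps(payload),
)
resp.raise_for_status()

# Each entry in "outputs" carries the tensor name, shape, and flat data.
for out in resp.json()["outputs"]:
    print(out["name"], out["shape"])
```

A Node-RED function node would build the same JSON payload and pass it to an `http request` node pointed at the `/v2/models/<model>/infer` URL.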
