NVIDIA-AI-IOT / deepstream_triton_model_deploy

How to deploy open source models using DeepStream and Triton Inference Server

------------------------------------------------------

This sample application is no longer maintained

------------------------------------------------------

Deploying an open source model using NVIDIA DeepStream and Triton Inference Server

This repository contains the code and configuration files required to deploy sample open source models for video analytics using Triton Inference Server and DeepStream SDK 5.0.

Getting Started

Prerequisites:

DeepStream SDK 5.0, or use the docker image nvcr.io/nvidia/deepstream:5.0.1-20.09-triton for x86 or nvcr.io/nvidia/deepstream-l4t:5.0-20.07-samples for NVIDIA Jetson.
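If you use the container route on x86, a typical launch looks like the sketch below. This assumes the NVIDIA Container Toolkit is installed for the --gpus flag; the X11 lines are only needed for on-screen display output.

```sh
# Allow containers to use the host X server (only needed for display output)
xhost +

# Start the DeepStream 5.0.1 Triton container on x86 with GPU access
docker run --gpus all -it --rm \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
```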

The following models have been deployed on DeepStream using Triton Inference Server.

For further details, please see each project's README.

TensorFlow Faster RCNN Inception V2 : README

The project shows how to deploy the TensorFlow Faster RCNN Inception V2 network, trained on the MS COCO dataset, for object detection.

[Image: faster_rcnn_output]
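Triton serves models from a versioned model repository, so each project's configuration sits on top of the same directory convention. A minimal sketch for the TensorFlow graphdef case follows; the repository path, model name, and frozen-graph filename are illustrative, and the project's README documents the actual layout.

```sh
# Triton convention: <model-repository>/<model-name>/<version>/<model-file>
# For a TensorFlow frozen graph, the file must be named model.graphdef.
# Paths and names below are illustrative.
mkdir -p trtis_model_repo/faster_rcnn_inception_v2/1
cp frozen_inference_graph.pb \
   trtis_model_repo/faster_rcnn_inception_v2/1/model.graphdef

# The model's config.pbtxt (inputs, outputs, max batch size) sits one level up:
#   trtis_model_repo/faster_rcnn_inception_v2/config.pbtxt
```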

ONNX CenterFace : README

The project shows how to deploy the ONNX CenterFace network for face detection and alignment.

[Image: centerface_output]
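An ONNX model follows the same repository convention, after which the pipeline can be started through the DeepStream reference application. The sketch below uses an illustrative config filename; the project's README lists the actual one.

```sh
# Triton picks up ONNX models named model.onnx inside a version directory.
# Paths and names below are illustrative.
mkdir -p trtis_model_repo/centerface/1
cp centerface.onnx trtis_model_repo/centerface/1/model.onnx

# Run the DeepStream reference app with the project's nvinferserver config
deepstream-app -c source1_primary_centerface.txt
```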

Additional resources:

Developer blog: Building Intelligent Video Analytics Apps Using NVIDIA DeepStream 5.0

Learn more about Triton Inference Server

Post your questions or feedback in the DeepStream SDK developer forums


License: Apache License 2.0


Languages

C++ 46.5%, Python 41.0%, Makefile 6.9%, Shell 5.6%