Triton Inference Server (triton-inference-server)

Home Page: https://developer.nvidia.com/nvidia-triton-inference-server

Triton Inference Server's repositories

server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Language: Python | License: BSD-3-Clause | Stargazers: 7528 | Watchers: 138 | Issues: 3528
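
For orientation, the server loads models from a model repository: each model gets a directory containing a config.pbtxt and numbered version subdirectories. A minimal sketch, assuming a hypothetical Python-backend model named doubler with made-up tensor names:

    model_repository/
    └── doubler/
        ├── config.pbtxt
        └── 1/
            └── model.py

    # config.pbtxt (protobuf text format); names and shapes are illustrative
    name: "doubler"
    backend: "python"
    max_batch_size: 8
    input [
      {
        name: "data"
        data_type: TYPE_FP32
        dims: [ -1 ]
      }
    ]
    output [
      {
        name: "out"
        data_type: TYPE_FP32
        dims: [ -1 ]
      }
    ]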

pytriton

PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.

Language: Python | License: Apache-2.0 | Stargazers: 674 | Watchers: 17 | Issues: 68
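
As a sketch of the interface, PyTriton's documented pattern is to bind a Python inference callable to a model name and serve it in-process; the doubler model, tensor names, and shapes below are illustrative:

    import numpy as np

    from pytriton.decorators import batch
    from pytriton.model_config import ModelConfig, Tensor
    from pytriton.triton import Triton

    @batch
    def infer_fn(data):
        # Inputs arrive as batched numpy arrays keyed by tensor name.
        return {"out": data * 2.0}

    with Triton() as triton:
        triton.bind(
            model_name="doubler",  # hypothetical model name
            infer_func=infer_fn,
            inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
            outputs=[Tensor(name="out", dtype=np.float32, shape=(-1,))],
            config=ModelConfig(max_batch_size=8),
        )
        triton.serve()  # blocks, serving over Triton's standard HTTP/gRPC ports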

tensorrtllm_backend

The Triton TensorRT-LLM Backend

Language: Python | License: Apache-2.0 | Stargazers: 535 | Watchers: 23 | Issues: 381

client

Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

Language: C++ | License: BSD-3-Clause | Stargazers: 502 | Watchers: 13 | Issues: 8
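
For example, a minimal HTTP inference call with the Python client library might look like the following; the model name and tensor names are placeholders:

    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    data = np.array([[1.0, 2.0, 3.0]], dtype=np.float32)
    inp = httpclient.InferInput("data", list(data.shape), "FP32")
    inp.set_data_from_numpy(data)

    # Run inference against a model named "doubler" (hypothetical).
    result = client.infer(model_name="doubler", inputs=[inp])
    print(result.as_numpy("out"))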

python_backend

Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.

Language: C++ | License: BSD-3-Clause | Stargazers: 488 | Watchers: 11 | Issues: 0
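
As a rough sketch of the interface this backend loads: each model provides a model.py defining a TritonPythonModel class whose execute method maps a batch of requests to responses (tensor names below are made up):

    import triton_python_backend_utils as pb_utils

    class TritonPythonModel:
        # Triton loads this class from the model's model.py file.

        def execute(self, requests):
            # Each call receives a batch of requests and must return
            # one InferenceResponse per request, in order.
            responses = []
            for request in requests:
                data = pb_utils.get_input_tensor_by_name(request, "data").as_numpy()
                out = pb_utils.Tensor("out", data * 2.0)
                responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
            return responses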

tutorials

This repository contains tutorials and examples for Triton Inference Server.

Language: Python | License: BSD-3-Clause | Stargazers: 439 | Watchers: 12 | Issues: 0

model_analyzer

Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

Language: Python | License: Apache-2.0 | Stargazers: 386 | Watchers: 13 | Issues: 149
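
A typical invocation, sketched with a placeholder repository path and model name:

    model-analyzer profile --model-repository /path/to/model_repository --profile-models doubler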

backend

Common source, scripts and utilities for creating Triton backends.

Language: C++ | License: BSD-3-Clause | Stargazers: 264 | Watchers: 13 | Issues: 0

model_navigator

Triton Model Navigator is an inference toolkit for optimizing and deploying deep learning models, with a focus on NVIDIA GPUs.

Language: Python | License: Apache-2.0 | Stargazers: 160 | Watchers: 10 | Issues: 31

dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented with DALI's Python API.

Language: C++ | License: MIT | Stargazers: 117 | Watchers: 9 | Issues: 71
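
As a sketch of how such a pipeline is typically defined and exported for the backend (batch size, input name, and target resolution below are illustrative):

    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def(batch_size=8, num_threads=4, device_id=0)
    def preprocessing():
        # Triton feeds encoded bytes into the pipeline via an external source.
        raw = fn.external_source(device="cpu", name="DALI_INPUT_0")
        images = fn.decoders.image(raw, device="mixed")  # GPU-accelerated decode
        return fn.resize(images, resize_x=224, resize_y=224)

    # The backend loads a serialized pipeline from the model's version directory.
    preprocessing().serialize(filename="model.dali")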

onnxruntime_backend

The Triton backend for the ONNX Runtime.

Language: C++ | License: BSD-3-Clause | Stargazers: 114 | Watchers: 15 | Issues: 99

pytorch_backend

The Triton backend for PyTorch TorchScript models.

Language: C++ | License: BSD-3-Clause | Stargazers: 107 | Watchers: 11 | Issues: 0

core

The core library and APIs implementing the Triton Inference Server.

Language: C++ | License: BSD-3-Clause | Stargazers: 93 | Watchers: 13 | Issues: 0

fil_backend

FIL backend for the Triton Inference Server.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 65 | Watchers: 19 | Issues: 169

common

Common source, scripts and utilities shared across all Triton repositories.

Language: C++ | License: BSD-3-Clause | Stargazers: 57 | Watchers: 10 | Issues: 0

tensorrt_backend

The Triton backend for TensorRT.

Language: C++ | License: BSD-3-Clause | Stargazers: 49 | Watchers: 10 | Issues: 0

tensorflow_backend

The Triton backend for TensorFlow.

Language: C++ | License: BSD-3-Clause | Stargazers: 39 | Watchers: 9 | Issues: 0

openvino_backend

OpenVINO backend for Triton.

Language: C++ | License: BSD-3-Clause | Stargazers: 25 | Watchers: 9 | Issues: 3

triton_cli

Triton CLI is an open-source command-line interface that lets users create, deploy, and profile models served by Triton Inference Server.

stateful_backend

Triton backend that automatically manages model state tensors with the sequence batcher.

Language: C++ | License: MIT | Stargazers: 11 | Watchers: 12 | Issues: 0

contrib

Community contributions to Triton that are not officially supported or maintained by the Triton project.

Language: Python | License: BSD-3-Clause | Stargazers: 8 | Watchers: 9 | Issues: 0

checksum_repository_agent

The Triton repository agent that verifies model checksums.

Language: C++ | License: BSD-3-Clause | Stargazers: 7 | Watchers: 8 | Issues: 0
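
For context, repository agents are attached per model in config.pbtxt. A sketch of enabling the checksum agent, assuming an MD5 parameter keyed by a file path relative to the model directory (the digest below is a placeholder):

    model_repository_agents {
      agents [
        {
          name: "checksum"
          parameters {
            key: "MD5:1/model.onnx"                      # file to verify
            value: "d41d8cd98f00b204e9800998ecf8427e"    # placeholder digest
          }
        }
      ]
    }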

redis_cache

TRITONCACHE implementation of a Redis cache.

Language: C++ | License: BSD-3-Clause | Stargazers: 7 | Watchers: 4 | Issues: 2
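
As a sketch, a response cache is selected when the server starts, with settings passed as key/value pairs; the host and port below are illustrative:

    tritonserver --cache-config redis,host=localhost --cache-config redis,port=6379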

third_party

Third-party source packages that are modified for use in Triton.

Language: C | License: BSD-3-Clause | Stargazers: 7 | Watchers: 8 | Issues: 0

identity_backend

Example Triton backend that demonstrates most of the Triton Backend API.

Language: C++ | License: BSD-3-Clause | Stargazers: 6 | Watchers: 8 | Issues: 0

repeat_backend

An example Triton backend that demonstrates sending zero, one, or multiple responses for each request.

Language: C++ | License: BSD-3-Clause | Stargazers: 5 | Watchers: 7 | Issues: 0

local_cache

Implementation of a local in-memory cache for Triton Inference Server's TRITONCACHE API.

Language: C++ | License: BSD-3-Clause | Stargazers: 2 | Watchers: 5 | Issues: 1
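
As a sketch, it is enabled by giving the cache a byte budget at startup; the size below is illustrative:

    tritonserver --cache-config local,size=1048576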

square_backend

Simple Triton backend used for testing.

Language: C++ | License: BSD-3-Clause | Stargazers: 2 | Watchers: 7 | Issues: 0