Triton Inference Server (triton-inference-server)

Organization data from GitHub: https://github.com/triton-inference-server

Home Page: https://developer.nvidia.com/nvidia-triton-inference-server

GitHub: @triton-inference-server

Triton Inference Server's repositories

server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.

Language: Python | License: BSD-3-Clause | Stargazers: 9995 | Issues: 146 | Issues: 4059
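
As a sketch of how a client talks to a running server, the snippet below sends an inference request over Triton's HTTP/REST API using the tritonclient package. The model name, tensor names, and shape are placeholders; they must match whatever your model repository actually serves.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server assumed to be listening on the default HTTP port.
client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_model", "INPUT0", and the shape are hypothetical; match your config.pbtxt.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))  # hypothetical output tensor name
```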

tensorrtllm_backend

The Triton TensorRT-LLM Backend
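
A minimal sketch of calling a TensorRT-LLM model through Triton's generate endpoint. The model name "ensemble" and the "text_input"/"max_tokens"/"text_output" field names follow the reference TRT-LLM configurations and are assumptions; use the names from your own model repository.

```python
import json
import urllib.request

# Hypothetical request against Triton's generate extension endpoint.
payload = {"text_input": "What is Triton Inference Server?", "max_tokens": 64}
req = urllib.request.Request(
    "http://localhost:8000/v2/models/ensemble/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # "text_output" is the conventional field in the reference configs.
    print(json.loads(resp.read())["text_output"])
```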

pytriton

PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments.

Language: Python | License: Apache-2.0 | Stargazers: 824 | Issues: 17 | Issues: 99
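
To make the "Flask/FastAPI-like" claim concrete, here is a minimal sketch of binding a plain Python function as a Triton model with PyTriton; the model and tensor names are illustrative.

```python
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_fn(INPUT0):
    # Inference callable: receives batched numpy arrays, returns a dict of outputs.
    return {"OUTPUT0": INPUT0 * 2.0}

with Triton() as triton:
    triton.bind(
        model_name="doubler",  # hypothetical model name
        infer_func=infer_fn,
        inputs=[Tensor(name="INPUT0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="OUTPUT0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # blocks, serving HTTP/gRPC like a regular Triton instance
```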

tutorials

This repository contains tutorials and examples for Triton Inference Server

Language: Python | License: BSD-3-Clause | Stargazers: 796 | Issues: 15 | Issues: 0

client

Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.

Language: Python | License: BSD-3-Clause | Stargazers: 656 | Issues: 12 | Issues: 68
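
The same request as the HTTP example above, but over gRPC (default port 8001), again using placeholder model and tensor names.

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to a Triton server assumed to be listening on the default gRPC port.
client = grpcclient.InferenceServerClient(url="localhost:8001")

data = np.random.rand(1, 16).astype(np.float32)
inp = grpcclient.InferInput("INPUT0", list(data.shape), "FP32")  # hypothetical names
inp.set_data_from_numpy(data)
out = grpcclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```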

python_backend

Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.

Language: C++ | License: BSD-3-Clause | Stargazers: 654 | Issues: 15 | Issues: 0
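
A minimal sketch of a python_backend model.py. The triton_python_backend_utils module is only importable inside a running Triton server, and the tensor names are placeholders that must match the model's config.pbtxt.

```python
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    """Doubles its input; INPUT0/OUTPUT0 are hypothetical tensor names."""

    def execute(self, requests):
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0").as_numpy()
            out0 = pb_utils.Tensor("OUTPUT0", (in0 * 2.0).astype(in0.dtype))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```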

model_analyzer

Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

Language: Python | License: Apache-2.0 | Stargazers: 495 | Issues: 11 | Issues: 170
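
A sketch of driving Model Analyzer's documented profile subcommand from Python; the repository paths and model name are placeholders.

```python
import subprocess

# Profile a hypothetical model; paths and model name are illustrative.
subprocess.run(
    [
        "model-analyzer", "profile",
        "--model-repository", "/models",
        "--profile-models", "my_model",
        "--output-model-repository-path", "/tmp/output_models",
    ],
    check=True,
)
```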

backend

Common source, scripts and utilities for creating Triton backends.

Language: C++ | License: BSD-3-Clause | Stargazers: 354 | Issues: 13 | Issues: 0

model_navigator

Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.

Language: Python | License: Apache-2.0 | Stargazers: 213 | Issues: 9 | Issues: 33
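
A minimal sketch of Model Navigator's Python entry point for optimizing a PyTorch model, assuming the nav.torch.optimize API from recent releases; the model and dataloader here are toy placeholders.

```python
import torch
import model_navigator as nav

# Toy model and dataloader; replace with your own.
model = torch.nn.Linear(16, 4).eval()
dataloader = [torch.randn(8, 16) for _ in range(4)]

# Converts the model across supported formats (e.g. TorchScript, ONNX,
# TensorRT) and benchmarks the results, returning an optimized package.
package = nav.torch.optimize(model=model, dataloader=dataloader)
```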

onnxruntime_backend

The Triton backend for the ONNX Runtime.

Language: C++ | License: BSD-3-Clause | Stargazers: 165 | Issues: 13 | Issues: 111

pytorch_backend

The Triton backend for PyTorch TorchScript models.

Language: C++ | License: BSD-3-Clause | Stargazers: 164 | Issues: 9 | Issues: 0

core

The core library and APIs implementing the Triton Inference Server.

Language: C++ | License: BSD-3-Clause | Stargazers: 155 | Issues: 25 | Issues: 0

dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's Python API.

Language: C++ | License: MIT | Stargazers: 138 | Issues: 8 | Issues: 76
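
A sketch of a DALI pre-processing pipeline serialized for use with dali_backend; the input name DALI_INPUT_0 and the repository path are illustrative.

```python
from nvidia.dali import fn, pipeline_def, types

@pipeline_def(batch_size=8, num_threads=4, device_id=0)
def preprocessing():
    # Encoded images arrive from Triton as an external source.
    raw = fn.external_source(device="cpu", name="DALI_INPUT_0")
    images = fn.decoders.image(raw, device="mixed", output_type=types.RGB)
    return fn.resize(images, resize_x=224, resize_y=224)

pipe = preprocessing()
# dali_backend loads the serialized pipeline as the model's version artifact.
pipe.serialize(filename="model_repository/dali_preproc/1/model.dali")
```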

fil_backend

FIL backend for the Triton Inference Server

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 83 | Issues: 19 | Issues: 181

tensorrt_backend

The Triton backend for TensorRT.

Language: C++ | License: BSD-3-Clause | Stargazers: 79 | Issues: 10 | Issues: 0

common

Common source, scripts and utilities shared across all Triton repositories.

Language: C++ | License: BSD-3-Clause | Stargazers: 77 | Issues: 11 | Issues: 0

triton_cli

Triton CLI is an open-source command-line interface that enables users to create, deploy, and profile models served by Triton Inference Server.

tensorflow_backend

The Triton backend for TensorFlow.

Language: C++ | License: BSD-3-Clause | Stargazers: 53 | Issues: 9 | Issues: 0

openvino_backend

OpenVINO backend for Triton.

Language: C++ | License: BSD-3-Clause | Stargazers: 34 | Issues: 9 | Issues: 6

redis_cache

A Redis implementation of Triton's TRITONCACHE API.

Language: C++ | License: BSD-3-Clause | Stargazers: 16 | Issues: 4 | Issues: 4

checksum_repository_agent

The Triton repository agent that verifies model checksums.

Language: C++ | License: BSD-3-Clause | Stargazers: 11 | Issues: 9 | Issues: 0

identity_backend

Example Triton backend that demonstrates most of the Triton Backend API.

Language: C++ | License: BSD-3-Clause | Stargazers: 7 | Issues: 9 | Issues: 0

repeat_backend

An example Triton backend that demonstrates sending zero, one, or multiple responses for each request.

Language: C++ | License: BSD-3-Clause | Stargazers: 7 | Issues: 8 | Issues: 0

third_party

Third-party source packages that are modified for use in Triton.

Language: C | License: BSD-3-Clause | Stargazers: 7 | Issues: 8 | Issues: 0

local_cache

Implementation of a local in-memory cache for Triton Inference Server's TRITONCACHE API

Language: C++ | License: BSD-3-Clause | Stargazers: 6 | Issues: 6 | Issues: 1

square_backend

Simple Triton backend used for testing.

Language: C++ | License: BSD-3-Clause | Stargazers: 3 | Issues: 8 | Issues: 0

.github

Community health files for NVIDIA Triton