Interactions' repositories
sample-odin-configs
Sample configs for setting up Odin locally
sample-odin-pipelines
Sample pipelines built with Odin
espnet
End-to-End Speech Processing Toolkit
ggml
Tensor library for machine learning
NeMo
NeMo: a toolkit for conversational AI
NeMo-I
NeMo: a toolkit for conversational AI
onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
riva-asrlib-decoder
Standalone implementation of the CUDA-accelerated WFST Decoder available in Riva
silero-vad
Silero VAD: pre-trained enterprise-grade Voice Activity Detector
TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
triton-client
Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala.
triton-server
The Triton Inference Server provides an optimized cloud and edge inferencing solution.