Jackch-NV's starred repositories

TensorRT-LLM

TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.

Language: C++ | License: Apache-2.0 | Stargazers: 8,444
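A minimal sketch of the Python API workflow described above, assuming a recent TensorRT-LLM release that exposes the high-level `LLM` and `SamplingParams` classes; the model name is a placeholder, not taken from this listing.

```python
# Hedged sketch: assumes tensorrt_llm's high-level LLM API (recent releases).
# The checkpoint path below is a placeholder.
from tensorrt_llm import LLM, SamplingParams

def main():
    # Builds (or loads) a TensorRT engine for the model and sets up the runtime.
    llm = LLM(model="meta-llama/Llama-2-7b-hf")

    prompts = ["TensorRT-LLM accelerates inference by"]
    params = SamplingParams(max_tokens=64, temperature=0.8)

    # Runs batched generation on the GPU using the built engine.
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)

if __name__ == "__main__":
    main()
```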

tensorrtllm_backend

The Triton TensorRT-LLM Backend

Language: Python | License: Apache-2.0 | Stargazers: 685

CV-CUDA

CV-CUDA™ is an open-source, GPU-accelerated library for cloud-scale image processing and computer vision.

Language: C++ | License: NOASSERTION | Stargazers: 2,360

torchnvjpeg

Decode JPEG images on the GPU using PyTorch

Language: C++ | License: MIT | Stargazers: 83
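A sketch of typical usage, assuming the library exposes a `torchnvjpeg.Decoder` class whose `decode(bytes)` method returns a CUDA tensor; the file path is a placeholder.

```python
# Hedged sketch: assumes torchnvjpeg provides a Decoder with a decode(bytes)
# method that returns an HWC uint8 tensor on a CUDA device.
import torch
import torchnvjpeg

decoder = torchnvjpeg.Decoder()

with open("example.jpg", "rb") as f:  # placeholder path
    jpeg_bytes = f.read()

# Decoding runs on the GPU via nvJPEG; the result stays in device memory.
image = decoder.decode(jpeg_bytes)
assert image.is_cuda and image.dtype == torch.uint8
print(image.shape)  # e.g. (height, width, 3)
```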

FasterTransformer

Transformer-related optimizations, including BERT and GPT

Language: C++ | License: Apache-2.0 | Stargazers: 5,830

trt-samples-for-hackathon-cn

Simple samples for TensorRT programming

Language: Python | License: Apache-2.0 | Stargazers: 1,495

TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.

Language: C++ | License: Apache-2.0 | Stargazers: 10,684
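A hedged sketch of building an inference engine with the standard `tensorrt` Python bindings (TensorRT 8.x-style API), parsing an ONNX model and serializing the optimized engine; the file paths are placeholders.

```python
# Hedged sketch: build a TensorRT engine from an ONNX file using the
# tensorrt Python bindings (TensorRT 8.x-style API). Paths are placeholders.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX graph into a TensorRT network definition.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels where supported

# Serialize the optimized engine so it can be deserialized later for inference.
engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)
```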