Repositories under the tensorrt-conversion topic:
Deep learning API and server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
InsightFace REST API for easy deployment of face recognition services with TensorRT in Docker.
YOLOv5 TensorRT implementations
A TensorRT implementation of UNet, inspired by tensorrtx
Advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch), including a converter from PyTorch -> ONNX -> TensorRT and inference pipelines (TensorRT, Triton server, multi-format). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX
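A Triton deployment like the one above needs a per-model configuration file. A minimal, hypothetical `config.pbtxt` for a TensorRT plan is sketched below; the model name, tensor names, and dimensions are assumptions for illustration, not taken from the repository:

```
name: "craft_trt"            # assumed model directory name
platform: "tensorrt_plan"    # serve a serialized TensorRT engine
max_batch_size: 8
input [
  {
    name: "input"            # assumed input tensor name
    data_type: TYPE_FP32
    dims: [ 3, -1, -1 ]      # CHW; -1 marks dynamic height/width
  }
]
output [
  {
    name: "output"           # assumed output tensor name
    data_type: TYPE_FP32
    dims: [ -1, -1, 2 ]
  }
]
```

The dynamic `-1` dims only work if the underlying TensorRT engine was built with dynamic shapes, which is why the ONNX export step matters.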
Using TensorRT for Inference Model Deployment.
Simple tool for PyTorch >> ONNX >> TensorRT conversion
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
Tools for Nvidia Jetson Nano, TX2, Xavier.
Conveniently convert the pretrained CRAFT text detection PyTorch model into a TensorRT engine directly, without an intermediate ONNX step
Dockerized TensorRT inference engine with an ONNX model conversion tool and a C++ implementation of ResNet50 preprocessing and postprocessing
Export a TensorRT engine (from ONNX) and run inference with Python
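Building an engine from an ONNX file with the TensorRT Python API generally follows the sketch below, assuming TensorRT 8 or later. The file paths are placeholders, and the import is guarded so the sketch loads even on a machine without TensorRT installed:

```python
try:
    import tensorrt as trt  # requires an NVIDIA GPU environment (assumption: TensorRT 8+)
except ImportError:
    trt = None

def build_engine(onnx_path: str, engine_path: str, fp16: bool = True) -> None:
    """Parse an ONNX file and serialize a TensorRT engine to disk."""
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch networks are required for ONNX models.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    if fp16 and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

Usage on a machine with TensorRT installed would look like `build_engine("model.onnx", "model.engine")`; the resulting file can be deserialized with `trt.Runtime` for inference.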
TensorRT optimises deep learning models by reducing their memory footprint and accelerating inference, making them well suited for deployment at the edge. This repository helps you convert deep learning models from TensorFlow to TensorRT!
TensorFlow's ssd_mobilenet_v2_coco, compatible with Jetson TX2, for TensorRT 6 / JetPack 4.3
A notebook project for learning TensorRT.
Experiments with the CIFAR-10 dataset to understand and implement various deep learning techniques and CNN architectures for image classification.
Convert popular deep learning models to TensorRT, preferably using the C++ API
TensorRT implementation with TensorFlow 2