tonhathuy's starred repositories
memory_profiler
Monitor Memory usage of Python code
mmtracking
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
retinanet-examples
Fast and accurate object detection with end-to-end GPU optimization
learn-to-cluster
Learning to Cluster Faces (CVPR 2019, CVPR 2020)
flexible-yolov5
A more readable and flexible yolov5 with additional backbones (GCN, ResNet, ShuffleNet, MobileNet, EfficientNet, HRNet, Swin Transformer, etc.), extra modules (CBAM, DCN, and so on), and TensorRT support
python_backend
Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
nnabla-examples
Neural Network Libraries https://nnabla.org/ - Examples
imagecluster
Cluster images based on image content using a pre-trained deep neural network, optional time distance scaling and hierarchical clustering.
model_navigator
Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.
eye_of_sauron
Stream processing using kafka-python to track people (from user-supplied images of the target) in the wild across multiple video streams.
yolov4-opencv-cpp-python
Example of using YOLO v4 with OpenCV, C++ and Python
Setup-deeplearning-tools
Set up CI for DL: CUDA / cuDNN / TensorRT / onnx2trt / onnxruntime / onnxsim / PyTorch / Triton Inference Server / Bazel / Tesseract / PaddleOCR / NVIDIA Docker / MinIO / Supervisord on AGX or a PC from scratch.
ssdf-nncore
nncore is a PyTorch framework focused on solving autonomous-driving problems.
faster-bert-as-service
Using TensorRT and Triton Inference Server to serve a BERT model as a service
nlp-triton-server
Event extraction pipeline demo on Triton Inference Server
triton_inference_server_examples
This repository provides examples of serving models with Triton Inference Server
DS_python_OD_IS
This project runs DeepStream SDK v5.0 to run inference on a native TensorFlow model using Triton Inference Server. A Raspberry Pi camera provides the input, and the output is rendered as an RTSP stream.
sentiment_analysis_triton
Triton Inference Server ensemble model for Sentiment Analysis
kubeflow-pipeline-nvidia-example
A Kubeflow pipeline example with NVIDIA DL examples and Triton Inference Server
jetson_triton
Triton server on JetPack 4.5