willer's repositories
yolov5_tensorrt_int8_tools
TensorRT INT8 quantization of YOLOv5 ONNX models
yolov5_tensorrt_int8
TensorRT INT8 quantized deployment of the yolov5s model; measured at 3.3 ms per frame!
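As a rough illustration of the INT8 workflow behind repos like this one, here is a minimal sketch of post-training INT8 calibration with the TensorRT Python API (8.x-style); the ONNX path, input shape, and random calibration batches are placeholders for illustration, not files or settings from this repository.

```python
# Minimal TensorRT INT8 post-training-quantization sketch (assumed paths/shapes).
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

class EntropyCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, batches, cache_file="calib.cache"):
        super().__init__()
        self.batches = iter(batches)        # iterable of float32 NCHW arrays
        self.cache_file = cache_file
        self.device_input = None

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        try:
            batch = next(self.batches)
        except StopIteration:
            return None                     # calibration finished
        if self.device_input is None:
            self.device_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("yolov5s.onnx", "rb") as f:       # assumed ONNX export of yolov5s
    parser.parse(f.read())

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)
# Random data stands in for real preprocessed calibration images.
calib_batches = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(8)]
config.int8_calibrator = EntropyCalibrator(calib_batches)
engine_bytes = builder.build_serialized_network(network, config)
with open("yolov5s_int8.engine", "wb") as f:
    f.write(engine_bytes)
```

In practice the calibration batches should come from a few hundred representative preprocessed images rather than random noise, otherwise the INT8 scales will be meaningless.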
yolov5_onnx2caffe
Convert YOLOv5 ONNX models to Caffe
yolov5_caffe
YOLOv5 inference with Caffe
RepVGG_TensorRT_int8
RepVGG TensorRT INT8 quantization; measured inference under 1 ms per frame!
EfficientNetv2_TensorRT_int8
EfficientNetV2 TensorRT INT8 quantization
nanodet_tensorrt_int8
NanoDet INT8 quantization; measured inference at 2 ms per frame!
nanodet_openvino
A step-by-step guide to deploying the NanoDet model with OpenVINO; measured at 6 ms per frame on an Intel i7-7700HQ CPU
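For context, a minimal sketch of CPU inference with the OpenVINO runtime (2022+ Python API) is shown below; the IR path and the 320x320 input shape are assumptions for illustration, not the exact NanoDet files from this repository.

```python
# Minimal OpenVINO CPU inference sketch with placeholder model path and input.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("nanodet.xml")             # assumed IR produced by model conversion
compiled = core.compile_model(model, device_name="CPU")

input_tensor = np.random.rand(1, 3, 320, 320).astype(np.float32)  # dummy preprocessed image
result = compiled([input_tensor])                  # results keyed by output nodes
output = result[compiled.output(0)]
print(output.shape)
```

Real usage would replace the random tensor with a resized, normalized image and then decode the raw output into boxes with NanoDet's post-processing.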
data-science-competition
This repository tracks and periodically posts announcements of major data science competitions, along with original baselines, solution ideas, and the author's competition notes and study materials. It mainly covers Kaggle, Alibaba Tianchi, Huawei Cloud campus competitions, Baidu AI Studio, Heywhale, DataFountain, and more.
Deepsort_V2
First-place solution in the preliminary round of the multi-object detection and tracking task (Alpha track) of the 2020 ZTE Pengyue Challenge
Inpaint-Anything
Inpaint anything using SAM + inpainting models.
kaggle-global-wheat-detection
9th Place Solution of Kaggle Global Wheat Detection
micronet
micronet, a model compression and deployment library. Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2 b) (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2 b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2. pruning: normal, regular, and group-convolution channel pruning; 3. group-convolution structure; 4. batch-normalization fusion for quantization. Deployment: TensorRT, FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), dynamic shape.
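To illustrate the quantization-aware-training idea listed above, here is a generic PyTorch eager-mode QAT sketch; it uses torch.ao.quantization rather than micronet's own interface, and TinyNet plus the random-data training loop are placeholders.

```python
# Generic eager-mode QAT sketch (not micronet's API): fake-quantize during
# training, then convert to a real int8 model.
import torch
import torch.nn as nn
from torch.ao.quantization import (QuantStub, DeQuantStub,
                                   get_default_qat_qconfig, prepare_qat, convert)

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # float -> int8 boundary
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 32 * 32, 10)
        self.dequant = DeQuantStub()  # int8 -> float boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return self.dequant(x)

model = TinyNet().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
qat_model = prepare_qat(model)        # insert fake-quant observers

opt = torch.optim.SGD(qat_model.parameters(), lr=1e-3)
for _ in range(10):                   # stand-in training loop on random data
    x = torch.randn(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(qat_model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

int8_model = convert(qat_model.eval())   # fold fake-quant into int8 ops
print(int8_model(torch.randn(1, 3, 32, 32)).shape)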
ncnn-android-yolov5
The YOLOv5 object detection Android example
Pedestrian-detection-paper-list
A list of useful pedestrian detection papers
RepVGG_openvino
Measured at 10 ms per frame on an i7-7700HQ, including pre- and post-processing!
sahi
A vision library for performing sliced inference on large images/small objects
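A minimal usage sketch of sahi's sliced-prediction API follows; the weights path, slice size, and thresholds are illustrative values, not defaults taken from this listing.

```python
# Minimal sahi sliced-inference sketch with placeholder paths and thresholds.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov5",
    model_path="yolov5s.pt",        # assumed local weights file
    confidence_threshold=0.4,
    device="cpu",
)

result = get_sliced_prediction(
    "large_image.jpg",              # large image that would overwhelm a single pass
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
for pred in result.object_prediction_list:
    print(pred.category.name, pred.score.value)
```

Slicing the image into overlapping tiles and merging the per-tile detections is what makes small objects in very large images detectable without shrinking them below the model's input resolution.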
tensorRTIntegrate
TensorRT ONNX plugin, inference, and compilation
Weighted-Boxes-Fusion
Set of methods to ensemble boxes from different object detection models, including implementation of "Weighted boxes fusion (WBF)" method.
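Below is a minimal usage sketch of the weighted_boxes_fusion function this repository provides (also published as the ensemble-boxes package); the boxes, scores, and model weights are made-up values, with coordinates normalized to [0, 1].

```python
# Minimal weighted-boxes-fusion sketch with made-up predictions from two models.
from ensemble_boxes import weighted_boxes_fusion

boxes_list = [
    [[0.10, 0.10, 0.50, 0.50], [0.60, 0.60, 0.90, 0.90]],  # model 1
    [[0.12, 0.11, 0.52, 0.49]],                            # model 2
]
scores_list = [[0.90, 0.75], [0.80]]
labels_list = [[0, 1], [0]]

boxes, scores, labels = weighted_boxes_fusion(
    boxes_list, scores_list, labels_list,
    weights=[2, 1],        # trust the first model twice as much
    iou_thr=0.55,          # boxes above this IoU are fused together
    skip_box_thr=0.0,      # drop boxes below this score before fusing
)
print(boxes, scores, labels)
```

Unlike NMS, which keeps one box and discards its overlapping rivals, WBF averages the overlapping boxes weighted by their confidences, which is why it works well for ensembling several detectors.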
yolo_quantization
Based on the paper "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"
yolov5-rt-stack
yolort is a runtime stack for YOLOv5 on specialized accelerators such as LibTorch, ONNX Runtime, TensorRT, TVM, and NCNN.