kaiJIN's starred repositories
PaddleSlim
PaddleSlim is an open-source library for deep model compression and architecture search.
cmake-optimize-architecture-flag
CMake module to optimize cflags for architecture extensions such as SSE, AVX
micronet
micronet, a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa, "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary methods (TWN, BNN, XNOR-Net), plus 8-bit post-training quantization (PTQ, TensorRT); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group-convolution structures; (4) batch-normalization fusion for quantization. Deployment: TensorRT with fp32/fp16/int8 (PTQ calibration), op adaptation (upsample), and dynamic shapes.
cnn-quantization
Quantization of convolutional neural networks.
Ultra-Fast-Lane-Detection
Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
electron-react-boilerplate
A Foundation for Scalable Cross-Platform Apps
Arm-neon-intrinsics
Documentation and instruction semantics for ARM NEON intrinsics.
hashing-baseline-for-image-retrieval
Various hashing methods for image retrieval, serving as baselines.
google-research
Google Research
Knowledge-Distillation-Zoo
PyTorch implementation of various Knowledge Distillation (KD) methods.
pytorch-metric-learning
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
research-ms-loss
MS-Loss: Multi-Similarity Loss for Deep Metric Learning
bus-segmentation
MATLAB implementation to segment breast lesions in ultrasound images (ICIAR 2016)
UniSIMD-assembler
SIMD macro assembler unified for ARM, MIPS, PPC and x86
PyTorch-VAE
A Collection of Variational Autoencoders (VAE) in PyTorch.
hardware-effects-gpu
Demonstration of various hardware effects on CUDA GPUs.