Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
10x faster matrix and vector operations.
Incredible acceleration with pruning and other compression techniques
Code release for "Adversarial Robustness vs Model Compression, or Both?"
Summary, Code for Deep Neural Network Quantization
A list of papers, docs, and code about model quantization. This repo aims to provide resources for model quantization research and is continuously improved. PRs adding works (papers, repositories) missed by the repo are welcome.
PyTorch AutoSlim tools: prune and compress a PyTorch model with three lines of code
My LeetCode Solutions with Explanation and Time Complexity Analysis
A PyTorch toolkit for automatic structured neural network pruning
Foreign-language reading and translation assistant based on copy-and-translate
Official electron build of diagrams.net
Awesome Artificial Intelligence Projects
LaTeX Thesis Template for Southeast University
A collection of papers on neural network model compression
Prune DNN using Alternating Direction Method of Multipliers (ADMM)
Awesome Neural Logic and Causality: MLN, NLRL, NLM, etc. Frontier topics in causal inference, neural logic, and logical reasoning toward strong AI.
Collection of recent methods on DNN compression and acceleration
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
PENNI: Pruned Kernel Sharing for Efficient CNN Inference
Automated deep learning (AutoML) algorithms implemented in PyTorch
PyTorch image models, scripts, and pretrained weights -- (SE)ResNet/ResNeXt, DPN, EfficientNet, MixNet, MobileNet-V3/V2, MNASNet, Single-Path NAS, FBNet, and more