Repositories under the efficient-neural-networks topic:
EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022]
Embedded and mobile deep learning research resources
Code and resources on scalable and efficient Graph Neural Networks (TNNLS 2023)
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
[CVPR'20] ZeroQ: A Novel Zero Shot Quantization Framework
[ICML'21 Oral] I-BERT: Integer-only BERT Quantization
[ECCV 2022] Official implementation of the paper "DeciWatch: A Simple Baseline for 10x Efficient 2D and 3D Pose Estimation"
Reference implementation for Blueprint Separable Convolutions (CVPR 2020)
[KDD'22] Learned Token Pruning for Transformers
[ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
[ICLR'21] Neural Pruning via Growing Regularization (PyTorch)
(ICLR 2024, CVPR 2024) SparseFormer
Hypercomplex Neural Networks with PyTorch
[ECCV 2020] Scale Adaptive Network: Learning to Learn Parameterized Classification Networks for Scalable Input Images
[ICASSP'22] Integer-only Zero-shot Quantization for Efficient Speech Recognition
Official PyTorch implementation of PSUMNet for efficient skeleton action recognition
Event-based neural networks
Finding Storage- and Compute-Efficient Convolutional Neural Networks
NeurIPS 2019 MicroNet Challenge
DiSK: Distilling Scaffolded Knowledge from Teacher to Student.
Approximating a 3DCNN with a 2DCNN
[NeurIPS'24] How Does Message Passing Improve Collaborative Filtering?
[ICLR 2021] Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Explore image transformations with the DeepDream algorithm and Neural Style Transfer for creative image processing.
ReLU++: A modified ReLU activation function with enhanced performance for deep learning models.
Ensemble Deep Random Vector Functional Link with Skip Connections (edRVFL-SC). No GPU required; 100× faster training.
Library for Structured Matrices (approximation methods and structured layers for neural networks)
Product recognition using the Generalized Hough Transform (GHT) and product classification with a tiny and extremely efficient convolutional neural network.
Implementation of MedQ: Lossless ultra-low-bit neural network quantization for medical image segmentation
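Several repositories in this topic (ZeroQ, I-BERT, F8Net, the integer-only ASR work, and MedQ) center on low-bit quantization. As background, here is a minimal sketch of uniform symmetric quantization, the basic building block these methods refine; it is a generic illustration and is not taken from any of the listed codebases.

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Map a float tensor to signed integers with one shared scale.

    Generic uniform symmetric quantization for illustration only;
    assumes x is not all-zero (otherwise scale would be zero).
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax          # per-tensor scale factor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integers and scale."""
    return q.astype(np.float32) * scale

np.random.seed(0)
x = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_symmetric(x)
x_hat = dequantize(q, s)
# Rounding error is bounded by half a quantization step.
assert np.max(np.abs(x - x_hat)) <= s / 2 + 1e-6
```

The listed projects go well beyond this baseline, e.g. calibrating without data (ZeroQ), replacing all floating-point ops with integer arithmetic (I-BERT, F8Net), or pushing to ultra-low bit widths (MedQ), but each starts from the same map between a float range and an integer grid.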