| Title | Venue | Type | Code |
|:------|:------|:-----|:-----|
| Parameter-Efficient Masking Networks | NeurIPS | W | PyTorch(Author) |
| "Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach | NeurIPS | W | PyTorch(Author) |
| Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing | NeurIPS | W | PyTorch(Author) |
| Models Out of Line: A Fourier Lens on Distribution Shift Robustness | NeurIPS | W | PyTorch(Author) |
| Robust Binary Models by Pruning Randomly-initialized Networks | NeurIPS | W | PyTorch(Author) |
| Rare Gems: Finding Lottery Tickets at Initialization | NeurIPS | W | PyTorch(Author) |
| Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning | NeurIPS | W | PyTorch(Author) |
| Pruning's Effect on Generalization Through the Lens of Training and Regularization | NeurIPS | W | - |
| Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation | NeurIPS | W | PyTorch(Author) |
| Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective | NeurIPS | W | - |
| Sparse Winning Tickets are Data-Efficient Image Recognizers | NeurIPS | W | PyTorch(Author) |
| Lottery Tickets on a Data Diet: Finding Initializations with Sparse Trainable Networks | NeurIPS | W | - |
| Weighted Mutual Learning with Diversity-Driven Model Compression | NeurIPS | F | - |
| SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance | NeurIPS | F | - |
| Data-Efficient Structured Pruning via Submodular Optimization | NeurIPS | F | PyTorch(Author) |
| Structural Pruning via Latency-Saliency Knapsack | NeurIPS | F | PyTorch(Author) |
| Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm | NeurIPS | WF | - |
| Pruning Neural Networks via Coresets and Convex Geometry: Towards No Assumptions | NeurIPS | WF | - |
| Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints | NeurIPS | WF | PyTorch(Author) |
| Advancing Model Pruning via Bi-level Optimization | NeurIPS | WF | PyTorch(Author) |
| Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons | NeurIPS | S | - |
| CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference | NeurIPS | S | PyTorch(Author)(Releasing) |
| Transform Once: Efficient Operator Learning in Frequency Domain | NeurIPS | Other | PyTorch(Author)(Releasing) |
| Most Activation Functions Can Win the Lottery Without Excessive Depth | NeurIPS | Other | PyTorch(Author) |
| Pruning has a disparate impact on model accuracy | NeurIPS | Other | - |
| Model Preserving Compression for Neural Networks | NeurIPS | Other | PyTorch(Author) |
| Prune Your Model Before Distill It | ECCV | W | PyTorch(Author) |
| FedLTN: Federated Learning for Sparse and Personalized Lottery Ticket Networks | ECCV | W | - |
| FairGRAPE: Fairness-Aware GRAdient Pruning mEthod for Face Attribute Classification | ECCV | F | PyTorch(Author) |
| SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning | ECCV | F | PyTorch(Author) |
| Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning | ECCV | F | PyTorch(Author) |
| CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution | ECCV | F | PyTorch(Author) |
| Soft Masking for Cost-Constrained Channel Pruning | ECCV | F | PyTorch(Author) |
| Filter Pruning via Feature Discrimination in Deep Neural Networks | ECCV | F | - |
| Disentangled Differentiable Network Pruning | ECCV | F | - |
| Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps | ECCV | F | PyTorch(Author) |
| Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning | ECCV | F | PyTorch(Author) |
| Multi-granularity Pruning for Model Acceleration on Mobile Devices | ECCV | WF | - |
| Exploring Lottery Ticket Hypothesis in Spiking Neural Networks | ECCV | S | PyTorch(Author) |
| Towards Ultra Low Latency Spiking Neural Networks for Vision and Sequential Tasks Using Temporal Pruning | ECCV | S | - |
| Recent Advances on Neural Network Pruning at Initialization | IJCAI | W | PyTorch(Author) |
| FedDUAP: Federated Learning with Dynamic Update and Adaptive Pruning Using Shared Data on the Server | IJCAI | F | - |
| On the Channel Pruning using Graph Convolution Network for Convolutional Neural Network Acceleration | IJCAI | F | - |
| Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization | IJCAI | F | - |
| Neural Network Pruning by Cooperative Coevolution | IJCAI | F | - |
| SPDY: Accurate Pruning with Speedup Guarantees | ICML | W | PyTorch(Author) |
| Sparse Double Descent: Where Network Pruning Aggravates Overfitting | ICML | W | PyTorch(Author) |
| The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks | ICML | W | PyTorch(Author) |
| Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness | ICML | F | PyTorch(Author) |
| Winning the Lottery Ahead of Time: Efficient Early Network Pruning | ICML | F | PyTorch(Author) |
| Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning | ICML | F | PyTorch(Author) |
| Fast Lossless Neural Compression with Integer-Only Discrete Flows | ICML | F | PyTorch(Author) |
| DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks | ICML | Other | PyTorch(Author) |
| PAC-Net: A Model Pruning Approach to Inductive Transfer Learning | ICML | Other | - |
| Neural Network Pruning Denoises the Features and Makes Local Connectivity Emerge in Visual Tasks | ICML | Other | PyTorch(Author) |
| Interspace Pruning: Using Adaptive Filter Representations To Improve Training of Sparse CNNs | CVPR | W | - |
| Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network | CVPR | W | - |
| When To Prune? A Policy Towards Early Structural Pruning | CVPR | F | - |
| Fire Together Wire Together: A Dynamic Pruning Approach With Self-Supervised Mask Prediction | CVPR | F | - |
| Revisiting Random Channel Pruning for Neural Network Compression | CVPR | F | PyTorch(Author)(Releasing) |
| Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning | CVPR | F | - |
| DECORE: Deep Compression With Reinforcement Learning | CVPR | F | - |
| CHEX: CHannel EXploration for CNN Model Compression | CVPR | F | - |
| Compressing Models With Few Samples: Mimicking Then Replacing | CVPR | F | PyTorch(Author)(Releasing) |
| Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning | CVPR | WF | - |
| DiSparse: Disentangled Sparsification for Multitask Model Compression | CVPR | Other | PyTorch(Author) |
| Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, And No Retraining | ICLR (Spotlight) | W | PyTorch(Author) |
| On Lottery Tickets and Minimal Task Representations in Deep Reinforcement Learning | ICLR (Spotlight) | W | - |
| An Operator Theoretic View On Pruning Deep Neural Networks | ICLR | W | PyTorch(Author) |
| Effective Model Sparsification by Scheduled Grow-and-Prune Methods | ICLR | W | PyTorch(Author) |
| Signing the Supermask: Keep, Hide, Invert | ICLR | W | - |
| How many degrees of freedom do we need to train deep networks: a loss landscape perspective | ICLR | W | PyTorch(Author) |
| Dual Lottery Ticket Hypothesis | ICLR | W | PyTorch(Author) |
| Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently | ICLR | W | PyTorch(Author) |
| Sparsity Winning Twice: Better Robust Generalization from More Efficient Training | ICLR | W | PyTorch(Author) |
| SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning | ICLR (Spotlight) | F | PyTorch(Author)(Releasing) |
| Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models | ICLR (Spotlight) | F | PyTorch(Author) |
| Revisit Kernel Pruning with Lottery Regulated Grouped Convolutions | ICLR | F | PyTorch(Author) |
| Plant 'n' Seek: Can You Find the Winning Ticket? | ICLR | F | PyTorch(Author) |
| Proving the Lottery Ticket Hypothesis for Convolutional Neural Networks | ICLR | F | PyTorch(Author) |
| On the Existence of Universal Lottery Tickets | ICLR | F | PyTorch(Author) |
| Training Structured Neural Networks Through Manifold Identification and Variance Reduction | ICLR | F | PyTorch(Author) |
| Learning Efficient Image Super-Resolution Networks via Structure-Regularized Pruning | ICLR | F | PyTorch(Author) |
| Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients | ICLR | WF | PyTorch(Author) |
| The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training | ICLR | Other | PyTorch(Author) |
| Prune and Tune Ensembles: Low-Cost Ensemble Learning with Sparse Independent Subnetworks | AAAI | W | - |
| Prior Gradient Mask Guided Pruning-Aware Fine-Tuning | AAAI | F | - |
| Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition | AAAI | Other | - |