There are 30 repositories under the sparse-neural-networks topic.
Always sparse. Never dense. But never say never. A Sparse Training repository for the Adaptive Sparse Connectivity concept and its algorithmic instantiation, i.e. Sparse Evolutionary Training (SET), to boost Deep Learning scalability in various respects (e.g. memory and computational efficiency, representation and generalization power). A minimal SET topology-update sketch follows this list.
My implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated".
[ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, Tianlong Chen, Wuyang Chen, Dong Liu, Zhangyang Wang
[ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" by Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen, Tianjin Huang, Ajay Kumar Jaiswal, Zhangyang Wang
Event-based neural networks
[Machine Learning Journal (ECML-PKDD 2022 journal track)] Quick and Robust Feature Selection: the Strength of Energy-efficient Sparse Training for Autoencoders
Demo code for the CVPR 2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
[IJCAI 2022] "Dynamic Sparse Training for Deep Reinforcement Learning" by Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, and Peter Stone.
Code for "Variational Depth Search in ResNets" (https://arxiv.org/abs/2002.02797)
PyTorch implementation of the paper "SpaceNet: Make Free Space For Continual Learning".
[ICLR 2022] "Peek-a-Boo: What (More) is Disguised in a Randomly Weighted Neural Network, and How to Find It Efficiently", by Xiaohan Chen, Jason Zhang and Zhangyang Wang.
[TMLR] Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
PyTorch implementation of Top-KAST (Top-K Always Sparse Training)
Official implementation of "Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition" (Neural Networks 2023)
A neural net with a terminal-based testing program.
[Machine Learning Journal (ECML-PKDD 2022 journal track)] A Brain-inspired Algorithm for Training Highly Sparse Neural Networks
Neural Networks with Sparse Weights in Rust using GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
Implementation of the article "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks". An iterative magnitude pruning sketch follows this list.
[ECAI 2024] Unveiling the Power of Sparse Neural Networks for Feature Selection
[ECML-PKDD 2024] Adaptive Sparsity Level during Training for Efficient Time Series Forecasting with Transformers
This is the repository for the SNN-22 Workshop paper "Generalization and Memorization in Sparse Neural Networks".
Characterization study of pruning, a popular way to compress a deep learning model. The repo also investigates optimal sparse tensor layouts for pruned networks.
Sparse Matrix Library for GPUs, CPUs, and FPGAs via CUDA, OpenCL, and oneAPI
Easily create and optimize PyTorch networks as in the Deep Rewiring paper (https://igi-web.tugraz.at/PDF/241.pdf). Install using 'pip install deep_rewire'
Simple C++ implementation of a sparsely connected multi-layer neural network using OpenMP and CUDA for parallelization.
GPU Computing course project
Neural Network Sparsification via Pruning
Robustness of Sparse Multilayer Perceptrons for Supervised Feature Selection
[NeurIPS 2024 FITML Workshop] This is the official code for the paper "Simultaneous Weight and Architecture Optimization for Neural Networks"
Master's Thesis Project - Lottery Tickets contain independent subnetworks when trained on independent tasks.
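The Sparse Evolutionary Training (SET) entry at the top of this list revolves around adaptive sparse connectivity: after each training epoch, the smallest-magnitude active weights of a sparse layer are pruned and an equal number of connections are regrown at random inactive positions. Below is a minimal NumPy sketch of that topology update; the function and variable names are illustrative and not taken from the listed repository.

```python
import numpy as np

def set_evolve(weights, mask, zeta=0.3, rng=None):
    """One SET topology update.

    `weights` is a dense array holding the sparse layer's values and
    `mask` is a boolean array of the same shape marking active connections.
    Prune the fraction `zeta` of active connections with the smallest
    magnitude, then regrow the same number at random inactive positions.
    """
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_update = int(zeta * active.size)

    # Prune: drop the weakest active connections.
    order = np.argsort(np.abs(weights.flat[active]))
    pruned = active[order[:n_update]]
    mask.flat[pruned] = False
    weights.flat[pruned] = 0.0

    # Regrow: activate the same number of random, currently inactive connections.
    inactive = np.flatnonzero(~mask)
    regrown = rng.choice(inactive, size=n_update, replace=False)
    mask.flat[regrown] = True
    weights.flat[regrown] = rng.normal(0.0, 0.01, size=n_update)
    return weights, mask
```

The sparsity level stays constant across updates because exactly as many connections are regrown as were pruned; only the topology evolves.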
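The Lottery Ticket Hypothesis entry above is centered on iterative magnitude pruning: train the network, prune a fraction of the smallest-magnitude surviving weights, rewind the remaining weights to their original initialization, and repeat. The PyTorch-style sketch below illustrates that loop under those assumptions; the `train_fn` callback and the pruning rate are placeholders, not that repository's API.

```python
import copy
import torch

def iterative_magnitude_pruning(model, train_fn, rounds=5, rate=0.2):
    """Lottery-ticket-style sketch: after each training round, prune the
    `rate` fraction of smallest-magnitude surviving weights globally and
    rewind the survivors to their original initialization."""
    init_state = copy.deepcopy(model.state_dict())
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}

    for _ in range(rounds):
        train_fn(model, masks)  # placeholder: any masked training loop

        # Global magnitude pruning over the still-active weights.
        scores = torch.cat([(p.abs() * masks[n]).flatten()
                            for n, p in model.named_parameters()])
        threshold = torch.quantile(scores[scores > 0], rate)
        for n, p in model.named_parameters():
            masks[n] = ((p.abs() > threshold) & masks[n].bool()).float()

        # Rewind surviving weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.mul_(masks[n])
    return model, masks
```

Each round removes a fixed fraction of the remaining weights, so overall sparsity compounds geometrically over rounds.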