| Title | Venue | Year |
|:---|:---:|:---:|
| Co-Exploring Structured Sparsification and Low-Rank Tensor Decomposition for Compact DNNs | TNNLS | 2024 |
| Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization | ISCA | 2024 |
| Coarse-To-Fine Tensor Trains for Compact Visual Representations ![GitHub Repo stars](https://camo.githubusercontent.com/25595f0df06736bd8745657c4f60f3ce9fb91c4ab48f317c4bb6a5e8c809ebda/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f736562756c6f2f50755454) | ICML | 2024 |
| Position: Tensor Networks are a Valuable Asset for Green AI | ICML | 2024 |
| Compression-aware Training of Neural Networks using Frank-Wolfe ![GitHub Repo stars](https://camo.githubusercontent.com/8682c0b58d0620aa65ec92b1fd478fe15ec84f59cc11c9cddc4270b621bd1427/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f5a49422d494f4c2f636f6d7072657373696f6e2d61776172652d534657) | Arxiv | 2024 |
| Unified Low-rank Compression Framework for Click-through Rate Prediction ![GitHub Repo stars](https://camo.githubusercontent.com/3dfa2a31b9a598a53892227647771ae19c19688f47691430c010dec2a7f8fa6e/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f797568616f3331382f61746f6d69635f666561747572655f6d696d69636b696e67) | KDD ADS | 2024 |
| A Practical Approach for Employing Tensor Train Decomposition in Edge Devices | International Journal of Parallel Programming | 2024 |
| Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition | Arxiv | 2024 |
| LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ![GitHub Repo stars](https://camo.githubusercontent.com/57413cfd408b6ad1967625a02e575ecbe1dca747a849fade83aea02604466b38/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f796966616e7963632f6c6f7265747461) | NAACL | 2024 |
| CoMERA: Computing- and Memory-Efficient Training via Rank-Adaptive Tensor Optimization | Arxiv | 2024 |
| FLoRA: Low-Rank Core Space for N-dimension ![GitHub Repo stars](https://camo.githubusercontent.com/e02146245531bab4e84b1f3c1480e6d403e7d3680eadd3a3d4b20090a5c2fd11/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f534a54552d44656570566973696f6e4c61622f464c6f5241) | Arxiv | 2024 |
| Reduced storage direct tensor ring decomposition for convolutional neural networks compression ![GitHub Repo stars](https://camo.githubusercontent.com/2e3e4b72f9932b0ed70f350b80ce84f7617a3923dae0915f653fce518252c29e/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f6d61746575737a6761626f722f72736474725f636f6d7072657373696f6e) | Arxiv | 2024 |
| Federated Learning Using Coupled Tensor Train Decomposition | Arxiv | 2024 |
| Neural Network Compression Based on Tensor Ring Decomposition | TNNLS | 2024 |
| Enhanced network compression through tensor decompositions and pruning ![GitHub Repo stars](https://camo.githubusercontent.com/b16785a901e1a7c1bf80885ccbc5948afd6698fa9a13c7d305729725cab5e4a8/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f70767469656e39362f4e4f52544f4e) | TNNLS | 2024 |
| Enhancing GAN Performance Through Neural Architecture Search and Tensor Decomposition ![GitHub Repo stars](https://camo.githubusercontent.com/0a401b2b86017f3d89d8ce515ce6a2a22f4311ddbdf0c6776d44a1c73fee1647/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f50726173616e6e6150756c616b75727468692f4d4d442d416476657273617269616c4e4153) | ICASSP | 2024 |
| Deep Convolutional Neural Network Compression Method: Tensor Ring Decomposition with Variational Bayesian Approach | Neural Processing Letters | 2024 |
| Deep Learning Model Compression With Rank Reduction in Tensor Decomposition | TNNLS | 2023 |
| Mixed-TD: Efficient Neural Network Accelerator with Layer-Specific Tensor Decomposition ![GitHub Repo stars](https://camo.githubusercontent.com/09f91bd74d65f6cdfd20317d5e4b7833652c23d83f8829dd1cfd7fa458c4c534/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f59752d5a686577656e2f4d697865642d5444) | FPL | 2023 |
| SVD-NAS: Coupling Low-Rank Approximation and Neural Architecture Search ![GitHub Repo stars](https://camo.githubusercontent.com/1cc48a050cde05e1852699e8d67a3701240a2935b50293ddc1778d8670a0644b/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f59752d5a686577656e2f5356442d4e4153) | WACV | 2023 |
| How Informative is the Approximation Error from Tensor Decomposition for Neural Network Compression? | ICLR | 2023 |
| FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer | AAAI | 2023 |
| Compressing convolutional neural networks with hierarchical Tucker-2 decomposition ![GitHub Repo stars](https://camo.githubusercontent.com/3331c561db41a3b2c64ccf44ec4b2ad63a93c7a7d1ca39e4f27b3ec84e1b6e4d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f6d61746575737a6761626f722f687432) | Applied Soft Computing | 2023 |
| Tensor shape search for efficient compression of tensorized data and neural networks | Applied Soft Computing | 2023 |
| An effective low-rank compression with a joint rank selection followed by a compression-friendly training | Neural Networks | 2023 |
| Joint matrix decomposition for deep convolutional neural networks compression | Neurocomputing | 2023 |
| Training Acceleration of Low-Rank Decomposed Networks using Sequential Freezing and Rank Quantization | Arxiv | 2023 |
| HODEC: Towards Efficient High-Order DEcomposed Convolutional Neural Networks | CVPR | 2022 |
| Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition | AAAI | 2022 |
| Towards Compact Neural Networks via End-to-End Training: A Bayesian Tensor Approach with Automatic Rank Determination ![GitHub Repo stars](https://camo.githubusercontent.com/2cee8799fff15a9902afe4843f620cb49b2c59f29ffedf994d393e5a8ae46354/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f636f6c656861776b696e732f626179657369616e2d74656e736f722d72616e6b2d64657465726d696e6174696f6e) | SIMODS | 2022 |
| Deep neural network compression by Tucker decomposition with nonlinear response | Knowledge-Based Systems | 2022 |
| Nested compression of convolutional neural networks with Tucker-2 decomposition | IJCNN | 2022 |
| PSM-nets: Compressing Neural Networks with Product of Sparse Matrices | IJCNN | 2022 |
| A Design Space Exploration Methodology for Enabling Tensor Train Decomposition in Edge Devices | SAMOS | 2022 |
| Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition ![GitHub Repo stars](https://camo.githubusercontent.com/d1a8fe092a9de059e533d927875b91f9bfb2a7e64255f087cb1fe7c88ecbd801/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f6c756361736c69652f746f7263687072756e65) | NeurIPS | 2021 |
| Deeply Shared Filter Bases for Parameter-Efficient Convolutional Neural Networks ![GitHub Repo stars](https://camo.githubusercontent.com/aceb37e38541639afd2250006e4c89f29ac1ccf59a2b3cc80eb755da336c0c9d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f73737265676962696c6974792f4e65745f524c32) | NeurIPS | 2021 |
| Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework | CVPR | 2021 |
| Deep Convolutional Neural Network Compression via Coupled Tensor Decomposition | JSTSP | 2021 |
| Low-Rank Compression of Neural Nets: Learning the Rank of Each Layer ![GitHub Repo stars](https://camo.githubusercontent.com/4041a70d79032b5d69fee12dc958d97bd452438028b4283f9a34a0a304b60301/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f55434d65726365642d4d4c2f4c432d6d6f64656c2d636f6d7072657373696f6e) | CVPR | 2020 |
| Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification ![GitHub Repo stars](https://camo.githubusercontent.com/1a0d9c0e49b91ff0d457a31ce454321437c7594589be16d140e5ac2c31b3bb9e/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f79616e6768722f5356445f5072756e655f45444c4356) | CVPRW | 2020 |
| ADA-Tucker: Compressing deep neural networks via adaptive dimension adjustment tucker decomposition | Neural Networks | 2019 |
| Learning Filter Basis for Convolutional Neural Network Compression ![GitHub Repo stars](https://camo.githubusercontent.com/05629c001ef8f8efdf6ed60d916c1044587c011aea135302a8ceb1233c51db4d/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f6f66736f756e646f662f6c6561726e696e675f66696c7465725f6261736973) | ICCV | 2019 |
| Compressing Deep Models using Multi Tensor Train Decomposition | ICCAIS | 2019 |
| Compressing Fully Connected Layers using Kronecker Tensor Decomposition | ICCSNT | 2019 |
| Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling ![GitHub Repo stars](https://camo.githubusercontent.com/a6c48d7c0f4da352ea028d695a989665cbe530b8c0c5c849972eee2a1759c004/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f7a75656e6b6f2f414c5246) | OpenReview | 2019 |
| On Compressing Deep Models by Low Rank and Sparse Decomposition | CVPR | 2017 |
| Factorized Convolutional Neural Networks | ICCVW | 2017 |
| Accelerating Very Deep Convolutional Networks for Classification and Detection | TPAMI | 2016 |
| Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications | ICLR | 2016 |
| Ultimate tensorization: compressing convolutional and FC layers alike ![GitHub Repo stars](https://camo.githubusercontent.com/ec52833d4b0db99c0c361e9f66a2db4654d9936098bbcf3570012316dd2a5028/68747470733a2f2f696d672e736869656c64732e696f2f6769746875622f73746172732f74696d67617269706f762f54656e736f724e65742d5446) | NIPSW | 2016 |
| Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition | ICLR | 2015 |
| Speeding up Convolutional Neural Networks with Low Rank Expansions | Arxiv | 2014 |