There are 37 repositories under the efficient-deep-learning topic.
[CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc.
A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously being improved. PRs adding works (papers, repositories) that the repo has missed are welcome.
[TMLR 2024] Efficient Large Language Models: A Survey
Collection of recent methods on (deep) neural network compression and acceleration.
Efficient Deep Learning Systems course materials (HSE, YSDA)
Code and resources on scalable and efficient Graph Neural Networks (TNNLS 2023)
[IEEE TPAMI] Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning
📚 Collection of awesome generation acceleration resources.
[NeurIPS 2023] Structural Pruning for Diffusion Models
A list of papers, docs, and code about efficient AIGC, covering both language and vision. This repo aims to provide information for efficient-AIGC research and is continuously being improved. PRs adding works (papers, repositories) that the repo has missed are welcome.
[CVPR 2024] Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis
📚 Collection of token-level model compression resources.
[NeurIPS 2022] Official implementation of the paper 'Green Hierarchical Vision Transformer for Masked Image Modeling'.
Official implementation of "EAGLES: Efficient Accelerated 3D Gaussians with Lightweight EncodingS"
A curated list of high-quality papers on resource-efficient LLMs 🌱
[NeurIPS 2021] Official codes for "Efficient Training of Visual Transformers with Small Datasets".
[TMLR 2025] Efficient Diffusion Models: A Survey
LauzHack Deep Learning Bootcamp
The best collection of AI tutorials to make you a boss of Data Science!
[CVPR 2025] Official implementation (PyTorch) of "EfficientViM: Efficient Vision Mamba with Hidden State Mixer-based State Space Duality"
[ICLR 2022] Data-Efficient Graph Grammar Learning for Molecular Generation
[ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu
Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021)
Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)
Denoising Diffusion Step-aware Models (ICLR 2024)
[ICCV'25] The official code of paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models"
[IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization.
Official implementation of MobileUNETR: A Lightweight End-To-End Hybrid Vision Transformer for Efficient Medical Image Segmentation (ECCV 2024, Oral)
Frame Flexible Network (CVPR 2023)
Recent Advances on Efficient Vision Transformers
FastCache: Fast Caching for Diffusion Transformer Through Learnable Linear Approximation [Efficient ML Model]
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
[Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
This repository provides implementations of a baseline method and our proposed methods for efficient skeleton-based human action recognition.
[ECCV 2024 Workshop Best Paper Award] Famba-V: Fast Vision Mamba with Cross-Layer Token Fusion
[ICLR'23] Trainability Preserving Neural Pruning (PyTorch)