MIT HAN Lab (mit-han-lab)


Efficient AI Computing. PI: Song Han

Location: MIT

Home Page: https://hanlab.mit.edu

Twitter: @songhan_mit


MIT HAN Lab's repositories

streaming-llm

[ICLR 2024] Efficient Streaming Language Models with Attention Sinks

Language: Python · License: MIT · Stargazers: 6073 · Issues: 59 · Issues: 71
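The attention-sink idea behind StreamingLLM can be illustrated as a toy KV-cache eviction policy: always keep the first few "sink" tokens plus a sliding window of the most recent tokens, and evict everything in between. A minimal sketch of that policy (illustrative names and defaults, not the repo's API):

```python
def streaming_keep(positions, n_sink=4, window=8):
    """Return the cache positions to keep under a StreamingLLM-style
    policy: the first n_sink attention-sink tokens plus a sliding
    window of the most recent tokens. Everything in between is evicted.
    """
    if len(positions) <= n_sink + window:
        return list(positions)          # cache still fits, evict nothing
    return list(positions[:n_sink]) + list(positions[-window:])
```

With `n_sink=4, window=8`, a 20-token sequence keeps positions 0-3 and 12-19, so the cache size stays bounded regardless of sequence length.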

temporal-shift-module

[ICCV 2019] TSM: Temporal Shift Module for Efficient Video Understanding

Language: Python · License: MIT · Stargazers: 2008 · Issues: 41 · Issues: 218
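The temporal shift operation at the heart of TSM is zero-parameter and essentially free: one slice of channels is shifted a step forward in time and another slice a step backward, so each frame's features mix with its neighbors'. A minimal NumPy sketch of the idea (not the repo's PyTorch implementation; `fold_div` is the usual 1/8 channel fraction):

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Shift 1/fold_div of channels one step backward in time and
    another 1/fold_div one step forward; leave the rest untouched.
    Vacated time steps are zero-padded.

    x: array of shape [N, T, C, H, W]
    """
    n, t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                   # shift left: frame t sees t+1
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]   # shift right: frame t sees t-1
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]              # remaining channels unchanged
    return out
```

Because the shift is pure data movement, it can be inserted into a 2D CNN to exchange information across frames at no extra FLOP cost.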

bevfusion

[ICRA'23] BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation

Language: Python · License: Apache-2.0 · Stargazers: 1934 · Issues: 43 · Issues: 580

once-for-all

[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment

Language: Python · License: MIT · Stargazers: 1830 · Issues: 52 · Issues: 75

llm-awq

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Language: Python · License: MIT · Stargazers: 1686 · Issues: 23 · Issues: 130

proxylessnas

[ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware

Language: C++ · License: MIT · Stargazers: 1407 · Issues: 70 · Issues: 0

data-efficient-gans

[NeurIPS 2020] Differentiable Augmentation for Data-Efficient GAN Training

Language: Python · License: BSD-2-Clause · Stargazers: 1256 · Issues: 19 · Issues: 97

efficientvit

EfficientViT is a family of vision models for efficient high-resolution vision tasks.

Language: Python · License: Apache-2.0 · Stargazers: 1206 · Issues: 26 · Issues: 76

torchquantum

A PyTorch-based framework for quantum-classical simulation, quantum machine learning, quantum neural networks, and parameterized quantum circuits, with support for easy deployment on real quantum computers.

Language: Jupyter Notebook · License: MIT · Stargazers: 1160 · Issues: 25 · Issues: 97

gan-compression

[CVPR 2020] GAN Compression: Efficient Architectures for Interactive Conditional GANs

Language: Python · License: NOASSERTION · Stargazers: 1083 · Issues: 31 · Issues: 100

torchsparse

TorchSparse: Efficient Training and Inference Framework for Sparse Convolution on GPUs.

Language: Cuda · License: MIT · Stargazers: 1082 · Issues: 18 · Issues: 229

smoothquant

[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

Language: Python · License: MIT · Stargazers: 966 · Issues: 20 · Issues: 70
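SmoothQuant's core trick is a mathematically equivalent per-channel rescaling that migrates activation outliers into the weights before quantization: activations are divided by a scale `s` and weights are multiplied by it, leaving the matmul result unchanged. A minimal NumPy sketch of the scale migration (illustrative, not the repo's code):

```python
import numpy as np

def smooth(X, W, alpha=0.5):
    """Migrate activation scale into weights so that
    (X / s) @ (diag(s) @ W) == X @ W, while per-channel activation
    outliers shrink and the quantization difficulty is balanced.

    X: activations [tokens, C_in]; W: weights [C_in, C_out];
    alpha: migration strength (0.5 balances the two sides).
    """
    act_max = np.abs(X).max(axis=0)   # per-input-channel activation range
    w_max = np.abs(W).max(axis=1)     # per-input-channel weight range
    s = act_max ** alpha / w_max ** (1 - alpha)
    return X / s, W * s[:, None], s
```

With `alpha=0.5` the per-channel maxima of the smoothed activations and weights become equal (both `sqrt(act_max * w_max)`), which is what makes both sides easy to quantize with coarse per-tensor scales.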

anycost-gan

[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing

Language: Python · License: MIT · Stargazers: 767 · Issues: 23 · Issues: 30

tinyengine

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 256KB Memory

Language: Python · License: MIT · Stargazers: 701 · Issues: 37 · Issues: 26

fastcomposer

FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

Language: Python · License: MIT · Stargazers: 576 · Issues: 22 · Issues: 29

spvnas

[ECCV 2020] Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution

Language: Python · License: MIT · Stargazers: 567 · Issues: 24 · Issues: 99

TinyChatEngine

TinyChatEngine: On-Device LLM Inference Library

Language: C++ · License: MIT · Stargazers: 491 · Issues: 12 · Issues: 28

amc

[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices

Language: Python · License: MIT · Stargazers: 416 · Issues: 17 · Issues: 25

tiny-training

[NeurIPS 2022] On-Device Training Under 256KB Memory

Language: Python · License: MIT · Stargazers: 388 · Issues: 16 · Issues: 8

mcunet

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning

Language: Python · License: MIT · Stargazers: 387 · Issues: 22 · Issues: 27

distrifuser

[CVPR 2024] DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models

Language: Python · License: MIT · Stargazers: 381 · Issues: 8 · Issues: 4

offsite-tuning

Offsite-Tuning: Transfer Learning without Full Model

Language: Python · License: MIT · Stargazers: 358 · Issues: 8 · Issues: 9

litepose

[CVPR'22] Lite Pose: Efficient Architecture Design for 2D Human Pose Estimation

Language: Python · License: MIT · Stargazers: 290 · Issues: 23 · Issues: 37

flatformer

[CVPR'23] FlatFormer: Flattened Window Attention for Efficient Point Cloud Transformer

sparsevit

[CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer

Language: Python · License: Apache-2.0 · Stargazers: 47 · Issues: 4 · Issues: 1

patch_conv

Patch convolution to avoid large GPU memory usage of Conv2D

Language: Python · License: MIT · Stargazers: 42 · Issues: 0 · Issues: 0
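The patch-convolution trick is to run the convolution on overlapping spatial strips and concatenate the results: with a halo of `kernel_size - 1` rows of overlap, the stitched output matches the monolithic convolution exactly while the peak activation memory is bounded by the strip size. A minimal NumPy sketch over row strips (illustrative, not the repo's Conv2D wrapper):

```python
import numpy as np

def conv2d(x, k):
    """Valid cross-correlation of a single-channel image x [H, W]
    with kernel k [kh, kw] -- the monolithic baseline."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def patched_conv2d(x, k, patch_rows=4):
    """Same result as conv2d, computed strip by strip: each strip
    includes kh-1 extra halo rows so the outputs tile exactly."""
    kh, _ = k.shape
    out_rows = x.shape[0] - kh + 1
    parts = []
    for start in range(0, out_rows, patch_rows):
        stop = min(start + patch_rows, out_rows)
        strip = x[start: stop + kh - 1]     # strip plus halo rows
        parts.append(conv2d(strip, k))
    return np.concatenate(parts, axis=0)
```

Because each strip is processed independently, peak memory scales with `patch_rows` rather than the full image height, which is the point of the repo's workaround for large Conv2D inputs.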

spatten-llm

[HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning

Language: Scala · License: MIT · Stargazers: 32 · Issues: 7 · Issues: 1