EIC@GaTech (GATECH-EIC)


Efficient and Intelligent Computing Lab

Location: United States of America

Home Page: https://eiclab.scs.gatech.edu/

EIC@GaTech's repositories

HW-NAS-Bench

[ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

Language: Python · License: MIT · Stargazers: 100 · Issues: 6 · Issues: 12

ViTCoD

[HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design

Language: Python · License: Apache-2.0 · Stargazers: 84 · Issues: 3 · Issues: 3

BNS-GCN

[MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling" by Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin

Language: Python · License: MIT · Stargazers: 49 · Issues: 4 · Issues: 5

ShiftAddLLM

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization

Language: Python · License: Apache-2.0 · Stargazers: 41 · Issues: 2 · Issues: 1
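The core idea behind multiplication-less reparameterization can be illustrated with a toy sketch (my own illustration of the general shift-and-add principle, not this repository's code): round each weight to a signed power of two, so that multiplying by it reduces to a sign flip plus a bit shift.

```python
import math

def quantize_pow2(w):
    """Round a weight to the nearest signed power of two.
    Returns (sign, exponent) so that w is approximated by sign * 2**exponent."""
    if w == 0:
        return 0.0, 0
    k = round(math.log2(abs(w)))      # nearest exponent
    return math.copysign(1.0, w), k

def shift_mul(x, sign, k):
    """Multiply integer x by sign * 2**k using only shifts (no multiply)."""
    y = x << k if k >= 0 else x >> -k
    return int(sign) * y

sign, k = quantize_pow2(3.7)          # 3.7 is closest to +2**2
print(shift_mul(10, sign, k))         # → 40, i.e. 10 * 4
```

On integer hardware a shift is far cheaper than a multiply, which is the motivation shared by the ShiftAdd* projects listed here.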

DepthShrinker

[ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin

GCoD

[HPCA 2022] GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design

Language: Python · License: Apache-2.0 · Stargazers: 32 · Issues: 1 · Issues: 0

CPT

[ICLR 2021] "CPT: Efficient Deep Neural Network Training via Cyclic Precision" by Yonggan Fu, Han Guo, Meng Li, Xin Yang, Yining Ding, Vikas Chandra, Yingyan Lin

Language: Python · License: MIT · Stargazers: 29 · Issues: 4 · Issues: 2
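As a rough illustration of cyclic precision (a sketch with made-up hyperparameters, not the paper's exact schedule): the quantization bit-width used during training can be cycled between a low and a high precision, e.g. with a cosine schedule.

```python
import math

def cyclic_precision(step, period=100, low=3, high=8):
    """Cosine schedule cycling the training bit-width between `low` and
    `high` bits within each period (all values here are hypothetical)."""
    t = (step % period) / period                   # position in cycle, [0, 1)
    frac = 0.5 * (1 - math.cos(2 * math.pi * t))   # 0 -> 1 -> 0 over one cycle
    return round(low + (high - low) * frac)

# Bit-width starts low, peaks mid-cycle, and returns low.
print([cyclic_precision(s, period=8) for s in range(8)])
```

The precision thus oscillates rather than staying fixed, which is the scheduling idea the CPT paper's title refers to.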

ShiftAddViT

[NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer

Language: Python · License: Apache-2.0 · Stargazers: 28 · Issues: 5 · Issues: 1

PipeGCN

[ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Youjie Li, Cameron R. Wolfe, Anastasios Kyrillidis, Nam Sung Kim, Yingyan Lin

Language: Python · License: MIT · Stargazers: 27 · Issues: 3 · Issues: 1

Patch-Fool

[ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin

Language: Python · License: MIT · Stargazers: 25 · Issues: 1 · Issues: 0

Castling-ViT

[CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Language: Python · License: Apache-2.0 · Stargazers: 24 · Issues: 0 · Issues: 0
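The O(N) advantage of linear attention that several of these projects exploit comes from matrix-product associativity: once the softmax is replaced by a kernel, (QKᵀ)V can be computed as Q(KᵀV), avoiding the N×N attention matrix. A minimal softmax-free sketch with toy integer-valued inputs (plain Python, not the repository's implementation):

```python
def matmul(A, B):
    """Naive matrix multiply over lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

# Toy Q, K, V: sequence length N = 3, head dimension d = 2.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 2.0], [0.0, 1.0], [2.0, 0.0]]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# Without the softmax, both orders give the same result, but the second
# builds only a d x d intermediate instead of an N x N one:
quadratic = matmul(matmul(Q, transpose(K)), V)   # (N x N) intermediate
linear    = matmul(Q, matmul(transpose(K), V))   # (d x d) intermediate
assert quadratic == linear
```

For long sequences (N ≫ d) the reordering changes the cost from O(N²d) to O(Nd²), which is what "linear attention" refers to in these repository titles.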

DNN-Chip-Predictor

[ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures

SuperTickets

[ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning

Language: Python · License: MIT · Stargazers: 17 · Issues: 2 · Issues: 0

ViTALiTy

[HPCA 2023] Official code repository for ViTALiTy

Language: Python · License: Apache-2.0 · Stargazers: 17 · Issues: 1 · Issues: 1

NeRFool

[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin

Language: Python · License: MIT · Stargazers: 14 · Issues: 3 · Issues: 2

S3-Router

[NeurIPS 2022] "Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing" by Yonggan Fu, Yang Zhang, Kaizhi Qian, Zhifan Ye, Zhongzhi Yu, Cheng-I Lai, Yingyan Lin

Language: Python · License: MIT · Stargazers: 14 · Issues: 4 · Issues: 0

ShiftAddNAS

[ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks

Language: Python · License: MIT · Stargazers: 14 · Issues: 2 · Issues: 1

torchshiftadd

An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications.

License: Apache-2.0 · Stargazers: 10 · Issues: 0 · Issues: 0

HALO

[ECCV 2020] Official code for "HALO: Hardware-aware Learning to Optimize"

Language: Python · License: MIT · Stargazers: 9 · Issues: 3 · Issues: 2

NASA

[ICCAD 2022] NASA: Neural Architecture Search and Acceleration for Hardware Inspired Hybrid Networks

Language: Python · Stargazers: 8 · Issues: 1 · Issues: 0

Linearized-LLM

[ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

Language: Python · License: Apache-2.0 · Stargazers: 7 · Issues: 3 · Issues: 0

ACT

[ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration

Language: Python · Stargazers: 3 · Issues: 0 · Issues: 0

Early-Bird-GCN

[AAAI 2022] Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets

Language: Python · License: Apache-2.0 · Stargazers: 3 · Issues: 2 · Issues: 1
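The "early-bird ticket" idea can be sketched as monitoring how much the pruning mask changes between epochs and drawing the ticket once it stabilizes. The threshold and window below are hypothetical illustration values, not the paper's settings:

```python
def mask_distance(m1, m2):
    """Fraction of positions where two binary pruning masks disagree."""
    return sum(a != b for a, b in zip(m1, m2)) / len(m1)

def is_early_bird(mask_history, eps=0.1, window=3):
    """Declare an early-bird ticket once the last `window` consecutive
    mask-to-mask distances all fall below eps (hypothetical values)."""
    if len(mask_history) < window + 1:
        return False
    recent = mask_history[-(window + 1):]
    return all(mask_distance(a, b) < eps
               for a, b in zip(recent, recent[1:]))

# The mask stops changing after epoch 1, so the ticket can be drawn early.
history = [[1, 1, 1, 0], [1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 1]]
print(is_early_bird(history))  # → True
```

Stopping the dense-training phase at that point, rather than training to convergence before pruning, is what makes the ticket "early-bird".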

EyeCoD

[ISCA 2022] EyeCoD: Eye Tracking System Acceleration via FlatCam-based Algorithm & Accelerator Co-Design

License: MIT · Stargazers: 3 · Issues: 1 · Issues: 0

InstantNet

[DAC 2021] "InstantNet: Automated Generation and Deployment of Instantaneously Switchable-Precision Networks" by Yonggan Fu, Zhongzhi Yu, Yongan Zhang, Yifan Jiang, Chaojian Li, Yongyuan Liang, Mingchao Jiang, Zhangyang Wang, Yingyan Lin

Language: Python · License: MIT · Stargazers: 3 · Issues: 3 · Issues: 0

Edge-LLM

[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting

Language: Python · Stargazers: 2 · Issues: 0 · Issues: 0

Spline-EB

[TMLR] Max-Affine Spline Insights Into Deep Network Pruning

Language: Python · License: MIT · Stargazers: 1 · Issues: 1 · Issues: 1