Shivam Aggarwal (shivmgg)

Location: Singapore

Home Page: shivmgg.github.io

Twitter: @shivmgg



Organizations
Electroholics

Shivam Aggarwal's starred repositories

Awesome-Transformer-Attention

A comprehensive paper list on Vision Transformers and attention, including papers, code, and related websites

deepsparse

Sparsity-aware deep learning inference runtime for CPUs

Language: Python | License: NOASSERTION | Stargazers: 2947 | Issues: 55 | Issues: 130

PipeCNN

An OpenCL-based FPGA Accelerator for Convolutional Neural Networks

Language: C | License: Apache-2.0 | Stargazers: 1217 | Issues: 73 | Issues: 176

Efficient-Deep-Learning

Collection of recent methods on (deep) neural network compression and acceleration.

ToMe

A method to increase the speed and lower the memory footprint of existing vision transformers.

Language: Python | License: NOASSERTION | Stargazers: 911 | Issues: 113 | Issues: 36
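The core idea behind token merging can be sketched in a few lines: reduce the sequence length by averaging together the most similar token pairs. The sketch below greedily merges similar *neighboring* tokens; it is a simplified stand-in, not ToMe's actual bipartite-soft-matching algorithm or API, and all names are illustrative.

```python
import numpy as np

def merge_tokens(x, r):
    """Shrink a token sequence (n, d) by averaging the r most-similar
    adjacent token pairs. Simplified sketch of token merging; not the
    official ToMe implementation."""
    n = x.shape[0]
    # cosine similarity between each token and its right neighbor
    norm = x / np.linalg.norm(x, axis=1, keepdims=True)
    sim = (norm[:-1] * norm[1:]).sum(axis=1)          # shape (n-1,)
    # greedily pick the r most-similar non-overlapping pairs
    chosen, used = [], set()
    for i in np.argsort(-sim):
        if i in used or i + 1 in used:
            continue
        chosen.append(i)
        used.update((i, i + 1))
        if len(chosen) == r:
            break
    # unmerged tokens pass through; each chosen pair becomes its mean
    keep = [x[i] for i in range(n) if i not in used]
    merged = [(x[i] + x[i + 1]) / 2 for i in chosen]
    return np.stack(keep + merged)

tokens = np.random.default_rng(0).normal(size=(8, 4))
reduced = merge_tokens(tokens, r=2)
print(reduced.shape)  # (6, 4): two pairs merged away
```

Each merge removes one token, so sequence length drops by `r` per call; applied once per transformer block, this compounds into a large speedup.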

awesome-multi-task-learning

An up-to-date (2023) list of datasets, codebases, and papers on Multi-Task Learning (MTL), from a machine learning perspective.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 546 | Issues: 14 | Issues: 15

DeltaPapers

Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models.

License: MIT | Stargazers: 267 | Issues: 16 | Issues: 0

tapa

TAPA is a dataflow HLS framework that features fast compilation and an expressive programming model, and generates high-frequency FPGA accelerators.

Language: C++ | License: MIT | Stargazers: 144 | Issues: 9 | Issues: 140

nntrainer

NNtrainer is a software framework for training neural network models on devices.

Language: C++ | License: Apache-2.0 | Stargazers: 139 | Issues: 14 | Issues: 675

microxcaling

PyTorch emulation library for Microscaling (MX)-compatible data formats

Language: Python | License: MIT | Stargazers: 131 | Issues: 7 | Issues: 19
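The essence of Microscaling-style formats is block floating point: a block of values shares one power-of-two scale, and each value is stored with only a few bits. The sketch below emulates this with a shared per-block scale and signed-integer values; it is a minimal illustration of the concept, not the microxcaling library's API, and the function name and defaults are assumptions.

```python
import numpy as np

def mx_quantize(x, block=32, bits=8):
    """Fake-quantize x in blocks of `block` values: each block shares
    one power-of-two scale, values are rounded to `bits`-bit signed
    integers, then dequantized. Simplified MX-style emulation; not the
    microxcaling library's actual interface."""
    orig_shape = x.shape
    xb = x.reshape(-1, block)
    qmax = 2 ** (bits - 1) - 1
    # shared scale: smallest power of two such that amax/scale <= qmax
    amax = np.abs(xb).max(axis=1, keepdims=True)
    amax = np.where(amax == 0, 1.0, amax)             # avoid log2(0)
    scale = 2.0 ** np.ceil(np.log2(amax / qmax))
    q = np.clip(np.round(xb / scale), -qmax, qmax)
    return (q * scale).reshape(orig_shape)

x = np.random.default_rng(0).normal(size=(64,))
xq = mx_quantize(x)
print(np.max(np.abs(x - xq)) < 0.1)  # small per-element error
```

Because the scale is a power of two, "multiplying by the scale" is just an exponent shift in hardware, which is what makes these formats cheap to implement.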

EcoFormer

[NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity"

Language: Python | License: Apache-2.0 | Stargazers: 66 | Issues: 5 | Issues: 1

mlir-cgra

An MLIR dialect to enable the efficient acceleration of ML models on CGRAs.

Language: C++ | License: BSD-3-Clause | Stargazers: 47 | Issues: 2 | Issues: 2

sparsity-in-deep-learning

Bibtex for Sparsity in Deep Learning paper (https://arxiv.org/abs/2102.00554) - open for pull requests

Language: TeX | Stargazers: 39 | Issues: 16 | Issues: 0

lisa

A portable framework to map a DFG (dataflow graph, representing an application) onto spatial accelerators.

Language: Dockerfile | License: MIT | Stargazers: 36 | Issues: 4 | Issues: 5

micro22-sparseloop-artifact

MICRO 2022 artifact evaluation for Sparseloop

Language: Jupyter Notebook | Stargazers: 33 | Issues: 2 | Issues: 4

Structure-LTH

[ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wang, Zhangyang Wang.

Language: Cuda | License: MIT | Stargazers: 30 | Issues: 8 | Issues: 2

Lifelong-Learning-LTH

[ICLR 2021] "Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang

Language: Python | License: MIT | Stargazers: 22 | Issues: 10 | Issues: 1

ACDC

Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021)

Language: Python | License: Apache-2.0 | Stargazers: 20 | Issues: 7 | Issues: 2
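The AC/DC idea is to alternate between sparse ("compressed") and dense ("decompressed") training phases so the network co-trains a sparse and a dense model. A toy sketch of the two ingredients, a phase schedule and a magnitude-pruning step, is below; the names, period, and sparsity level are illustrative assumptions, not the paper's code.

```python
import numpy as np

def acdc_phase(step, period=10):
    """Alternate dense and sparse training phases every `period` steps
    (toy AC/DC-style schedule; constants are illustrative)."""
    return "compressed" if (step // period) % 2 else "decompressed"

def compress(w, sparsity=0.9):
    """Magnitude pruning: zero the smallest-magnitude weights, keeping
    the top (1 - sparsity) fraction."""
    k = int(w.size * sparsity)
    idx = np.argsort(np.abs(w).ravel())[:k]   # indices of smallest |w|
    mask = np.ones(w.size, dtype=bool)
    mask[idx] = False
    return w * mask.reshape(w.shape)

w = np.random.default_rng(1).normal(size=(4, 5))
w_sparse = compress(w)
print((w_sparse == 0).sum())  # 18 of 20 weights pruned
print(acdc_phase(5), acdc_phase(15))
```

During a compressed phase the mask is applied after each optimizer step; during a decompressed phase all weights train freely, which is what lets pruned weights recover.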

tapa

TAPA is a dataflow HLS framework that features fast compilation and an expressive programming model, and generates high-frequency FPGA accelerators. [See https://github.com/UCLA-VAST/tapa for issues & pull requests]

Language: C++ | License: MIT | Stargazers: 19 | Issues: 3 | Issues: 0

PRE-DFKD

Official implementation of the work titled "Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay"

Progressive-Pruning

Official PyTorch code for "APP: Anytime Progressive Pruning" (DyNN @ ICML 2022; CLL @ ACML 2022; SNN @ ICML 2022; and SlowDNN 2023)

Language: Python | Stargazers: 16 | Issues: 2 | Issues: 0

JetsonPower

Framework for energy monitoring and measurement on NVIDIA Jetson boards

Language: Python | License: Apache-2.0 | Stargazers: 3 | Issues: 0 | Issues: 0