Donggeun Yu's starred repositories
Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
AITemplate
AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
line_profiler
Line-by-line profiling for Python
Awesome-Pruning
A curated list of neural network pruning resources.
terraform-provider-google
Terraform Provider for Google Cloud Platform
Knowledge-Distillation-Zoo
PyTorch implementation of various Knowledge Distillation (KD) methods.
sparse_attention
Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
blocksparse
Efficient GPU kernels for block-sparse matrix multiplication and convolution
Awesome_Prompting_Papers_in_Computer_Vision
A curated list of prompt-based papers in computer vision and vision-language learning.
Awesome-Masked-Autoencoders
A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).
filter-pruning-geometric-median
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
DynamicViT
[NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification
quantized_distillation
Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"
EfficientTrain
1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundation visual backbones.
pytorch-lars
"Layer-wise Adaptive Rate Scaling" in PyTorch
batchnorm-pruning
Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124
hands-on-terraform-with-gcp
Let's learn Terraform with GCP step by step
dorefanet-pytorch
A PyTorch implementation of DoReFa-Net