Donggeun Yu (DonggeunYu)

Company: SI Analytics

Location: Korea

Home Page: https://yudonggeun.github.io

Organizations
SIAnalytics

Donggeun Yu's starred repositories

ruff

An extremely fast Python linter and code formatter, written in Rust.

Language: Rust | License: MIT | Stargazers: 31373 | Issues: 78 | Issues: 5187

k9s

🐶 Kubernetes CLI To Manage Your Clusters In Style!

Language: Go | License: Apache-2.0 | Stargazers: 26690 | Issues: 149 | Issues: 1828

unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities

Language: Python | License: MIT | Stargazers: 19613 | Issues: 301 | Issues: 1356

Swin-Transformer

This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".

Language: Python | License: MIT | Stargazers: 13666 | Issues: 127 | Issues: 312
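
Swin's core operation is partitioning the feature map into fixed-size local windows (shifted by half a window in alternating blocks) so self-attention stays linear in image size. A minimal sketch of the window partition in PyTorch; the helper name is illustrative, not the repository's API:

```python
import torch

def window_partition(x, window_size=7):
    # Split a feature map (B, H, W, C) into non-overlapping local windows,
    # returning (num_windows * B, window_size, window_size, C).
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

# The "shifted" variant rolls the map by half a window before partitioning, e.g.
# x = torch.roll(x, shifts=(-window_size // 2, -window_size // 2), dims=(1, 2))
```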

AITemplate

AITemplate is a Python framework that renders neural networks into high-performance CUDA/HIP C++ code. It is specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.

Language: Python | License: Apache-2.0 | Stargazers: 4538 | Issues: 82 | Issues: 244

mmyolo

OpenMMLab YOLO series toolbox and benchmark. Implements RTMDet, RTMDet-Rotated, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX, PPYOLOE, etc.

Language: Python | License: GPL-3.0 | Stargazers: 2943 | Issues: 33 | Issues: 393

line_profiler

Line-by-line profiling for Python

Language: Python | License: NOASSERTION | Stargazers: 2674 | Issues: 15 | Issues: 97
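
Typical usage wraps a function with LineProfiler and prints per-line hit counts and timings; a minimal sketch (the profiled function is just an example):

```python
from line_profiler import LineProfiler

def slow_sum_of_squares(n):
    total = 0
    for i in range(n):   # the profiler reports hits and time for each of these lines
        total += i * i
    return total

lp = LineProfiler()
profiled = lp(slow_sum_of_squares)  # wrap the function so its lines are timed
profiled(100_000)
lp.print_stats()                    # per-line timing table
```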

torchinfo

View model summaries in PyTorch!

Language: Python | License: MIT | Stargazers: 2525 | Issues: 18 | Issues: 157
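
A minimal sketch of the summary() call, which runs a forward pass on a dummy input and prints per-layer output shapes and parameter counts (the model here is illustrative):

```python
import torch.nn as nn
from torchinfo import summary

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),  # 32x32 input -> 30x30 after the 3x3 conv
)
summary(model, input_size=(1, 3, 32, 32))
```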

Awesome-Pruning

A curated list of neural network pruning resources.

terraform-provider-google

Terraform Provider for Google Cloud Platform

Language: Go | License: MPL-2.0 | Stargazers: 2307 | Issues: 134 | Issues: 9654

Knowledge-Distillation-Zoo

PyTorch implementation of various Knowledge Distillation (KD) methods.
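
For context, the baseline most KD variants build on is Hinton-style logit distillation: a temperature-softened KL term plus the usual cross-entropy on hard labels. A generic sketch (not code from this repository):

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    # Soft-target term: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```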

kernl

Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

Language: Jupyter Notebook | License: Apache-2.0 | Stargazers: 1525 | Issues: 29 | Issues: 174

sparse_attention

Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
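
The strided pattern from that paper lets position i attend to the previous `stride` positions plus every stride-th earlier position. A small sketch that builds such a mask in PyTorch (an illustrative helper, not the repository's code):

```python
import torch

def strided_sparse_mask(n, stride):
    # Boolean (n, n) attention mask: token i may attend to token j if j <= i and
    # either j lies in the last `stride` positions (local band) or (i - j) is a
    # multiple of `stride` (strided connections), as in Sparse Transformers.
    i = torch.arange(n).unsqueeze(1)
    j = torch.arange(n).unsqueeze(0)
    causal = j <= i
    local = (i - j) < stride
    strided = ((i - j) % stride) == 0
    return causal & (local | strided)
```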

blocksparse

Efficient GPU kernels for block-sparse matrix multiplication and convolution

Language: Cuda | License: MIT | Stargazers: 1022 | Issues: 198 | Issues: 48

Awesome_Prompting_Papers_in_Computer_Vision

A curated list of prompt-based papers in computer vision and vision-language learning.

Awesome-Masked-Autoencoders

A collection of literature after or concurrent with Masked Autoencoder (MAE) (Kaiming He et al.).

filter-pruning-geometric-median

Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
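
The paper's criterion prunes the filters closest to the layer's geometric median, i.e. those whose information is most redundant. A simplified sketch that approximates this by total pairwise distance (illustrative, not the authors' code):

```python
import torch

def fpgm_prune_indices(conv_weight, prune_ratio=0.3):
    # conv_weight: (out_channels, in_channels, kH, kW)
    n = conv_weight.shape[0]
    flat = conv_weight.reshape(n, -1)
    dist = torch.cdist(flat, flat)           # pairwise Euclidean distances between filters
    score = dist.sum(dim=1)                  # filters near the geometric median have small total distance
    num_prune = int(n * prune_ratio)
    return torch.argsort(score)[:num_prune]  # indices of filters to prune
```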

DynamicViT

[NeurIPS 2021] [T-PAMI] DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification

Language: Jupyter Notebook | License: MIT | Stargazers: 563 | Issues: 10 | Issues: 44
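
At inference time the idea reduces to keeping only the highest-scoring tokens predicted by a lightweight head (training uses differentiable Gumbel-softmax masks, omitted here). A hedged sketch with illustrative names:

```python
import torch

def keep_topk_tokens(tokens, keep_scores, keep_ratio=0.7):
    # tokens: (B, N, C); keep_scores: (B, N) from a small prediction head.
    # Gather the top-k tokens per sample and drop the rest.
    B, N, C = tokens.shape
    k = max(1, int(N * keep_ratio))
    idx = keep_scores.topk(k, dim=1).indices
    return torch.gather(tokens, 1, idx.unsqueeze(-1).expand(B, k, C))
```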

quantized_distillation

Implements quantized distillation. Code for our paper "Model compression via distillation and quantization"

Language: Python | License: MIT | Stargazers: 329 | Issues: 10 | Issues: 17

EfficientTrain

1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundation visual backbones.

Language: Python | License: MIT | Stargazers: 197 | Issues: 6 | Issues: 11

RevTorch

Framework for creating (partially) reversible neural networks with PyTorch

Language: Python | License: BSD-3-Clause | Stargazers: 145 | Issues: 6 | Issues: 7
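
The core trick in reversible networks is an additive coupling whose inverse is exact, so intermediate activations can be recomputed during the backward pass instead of stored. A concept sketch (not the RevTorch API, which also implements the custom backward):

```python
import torch.nn as nn

class ReversibleCoupling(nn.Module):
    # Additive coupling: y1 = x1 + F(x2), y2 = x2 + G(y1).
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Exact reconstruction of the inputs from the outputs.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2
```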

minREV

A simple minimal implementation of Reversible Vision Transformers

AO2-DETR

AO2-DETR: Arbitrary-Oriented Object Detection Transformer

Language: Python | License: Apache-2.0 | Stargazers: 85 | Issues: 5 | Issues: 29

pytorch-lars

"Layer-wise Adaptive Rate Scaling" in PyTorch

Language: Python | License: MIT | Stargazers: 85 | Issues: 7 | Issues: 3
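
LARS scales each layer's update by a trust ratio proportional to ||w|| / ||grad + wd·w||, which is what makes very large batch sizes trainable. A minimal single-tensor sketch (illustrative, not this repository's optimizer class):

```python
import torch

@torch.no_grad()
def lars_step(param, base_lr=0.1, eta=1e-3, weight_decay=1e-4):
    # One LARS update: the global learning rate is multiplied by a
    # layer-wise trust ratio eta * ||w|| / ||grad + wd * w||.
    if param.grad is None:
        return
    update = param.grad + weight_decay * param
    w_norm, u_norm = param.norm(), update.norm()
    trust = float(eta * w_norm / (u_norm + 1e-12)) if w_norm > 0 and u_norm > 0 else 1.0
    param.add_(update, alpha=-base_lr * trust)
```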

batchnorm-pruning

Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers https://arxiv.org/abs/1802.00124
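
The paper enforces sparsity on BatchNorm scale factors with an ISTA-style proximal step, so whole channels whose gamma reaches zero can be removed. A hedged sketch of that soft-thresholding step (illustrative names, not the repository's code):

```python
import torch

@torch.no_grad()
def ista_soft_threshold_bn(bn: torch.nn.BatchNorm2d, lr: float, sparsity_lambda: float):
    # Proximal (soft-threshold) step on the BN scale gamma: shrink its magnitude by
    # lr * lambda and clamp at zero; channels whose gamma hits zero become prunable.
    g = bn.weight
    g.copy_(torch.sign(g) * torch.clamp(g.abs() - lr * sparsity_lambda, min=0.0))
```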

siatune

Hyperparameter Tuning Toolbox for OpenMMLab Frameworks, especially for Remote Sensing Tasks

Language: Python | License: Apache-2.0 | Stargazers: 47 | Issues: 3 | Issues: 30

ToST

[ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang

Language: Python | Stargazers: 23 | Issues: 11 | Issues: 0

hands-on-terraform-with-gcp

Let's learn Terraform with GCP step by step

dorefanet-pytorch

A PyTorch implementation of DoReFa-Net.

Language: Python | License: MIT | Stargazers: 7 | Issues: 1 | Issues: 0
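
DoReFa-Net quantizes weights by squashing them with tanh, mapping the result to [0, 1], rounding to k bits, and mapping back to [-1, 1] (the straight-through gradient estimator used during training is omitted). A sketch of that forward transform, not this repository's code:

```python
import torch

def quantize_k(x, k):
    # Uniform k-bit quantization of x assumed to lie in [0, 1].
    levels = float(2 ** k - 1)
    return torch.round(x * levels) / levels

def dorefa_quantize_weights(w, k=2):
    # Forward pass of DoReFa-Net weight quantization to k bits.
    t = torch.tanh(w)
    x = t / (2.0 * t.abs().max()) + 0.5   # map to [0, 1]
    return 2.0 * quantize_k(x, k) - 1.0   # map back to [-1, 1]
```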