Seokjoong Kim's starred repositories
liquidSVM
Support vector machines (SVMs) and related kernel-based learning algorithms are a well-known class of machine learning methods for non-parametric classification and regression. liquidSVM is an SVM implementation whose key features are: fully integrated hyper-parameter selection, extreme speed on both small and large data sets, full flexibility for experts, and support for a variety of learning scenarios: multi-class classification, ROC and Neyman-Pearson learning, and least-squares, quantile, and expectile regression.
paper-review
Deep learning paper reviews covering models, systems, and hardware architecture
inter-operator-scheduler
[MLSys 2021] IOS: Inter-Operator Scheduler for CNN Acceleration
pytorch-lightning
Pretrain, finetune, and deploy AI models on multiple GPUs and TPUs with zero code changes.
ai-notebooks
Some IPython notebooks implementing AI algorithms
ViT-pytorch
PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale")
minimal-BERT
Bidirectional Encoder Representations from Transformers
Korea-Startups
🌟 A list of Korean startups with descriptions 🌟
commit-autosuggestions
A tool that uses AI to automatically suggest commit messages.
gradient-checkpointing
Make huge neural nets fit in memory
big_transfer
Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.
proxylessnas
[ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
LambdaNetworks
Implementation of Lambda Networks in PyTorch
model-tools
Tools for computing model parameters and FLOPs.
pytorch-OpCounter
Count the MACs / FLOPs of your PyTorch model.
Pruning-Filter-in-Filter
Pruning Filter in Filter (NeurIPS 2020)
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single Transformer encoder, in PyTorch
awesome-sushi
🍣 A list of omakase sushi restaurants in Korea