Junsu Kim's starred repositories
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable). It combines the best of RNN and transformer: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embedding.
capsule-render
Dynamic Colorful Image Render
deeplearning
Python implementation of the Deep Learning book
dlbook_notation
LaTeX files for the Deep Learning book notation
hammerspoon-foundation_remapping
Hammerspoon configuration script that remaps any key on macOS Sierra.
spoqa-han-sans
Spoqa Han Sans
sparktorch
Train and run PyTorch models on Apache Spark.
PyTorch-VAE
A Collection of Variational Autoencoders (VAE) in PyTorch.
tab-transformer-pytorch
Implementation of TabTransformer, an attention network for tabular data, in PyTorch
lightning-transformers
Flexible components pairing 🤗 Transformers with ⚡ PyTorch Lightning
Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
dalle2-laion
Pretrained DALL·E 2 from LAION
MM-CelebA-HQ-Dataset
[CVPR 2021] A large-scale face image dataset that supports text-to-image generation, text-guided image manipulation, sketch-to-image generation, GANs for face generation and editing, image captioning, and VQA
BigGAN-PyTorch
The author's officially unofficial PyTorch BigGAN implementation.
github-profile-achievements
A collection listing all Achievements available on the GitHub profile
awesome-deep-learning
A curated list of awesome Deep Learning tutorials, projects, and communities.
vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch