Liu Yue's starred repositories
RWKV-LM
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
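A minimal sketch of the kind of per-channel WKV recurrence that gives RWKV its cheap RNN-style inference, assuming a simplified form without the bonus term or the numerical-stability rescaling the actual repo uses; all names below are illustrative, not the repo's API.

```python
import torch

def wkv_recurrence(k, v, w):
    """Simplified RWKV-style WKV recurrence (illustrative only).

    k, v: (T, C) key and value sequences
    w:    (C,)  positive per-channel decay
    Returns (T, C) outputs computed in O(T) with constant state,
    which is what makes RNN-style inference cheap.
    """
    T, C = k.shape
    num = torch.zeros(C)           # running weighted sum of values
    den = torch.zeros(C)           # running sum of weights
    decay = torch.exp(-w)          # exponential decay per channel
    out = []
    for t in range(T):
        weight = torch.exp(k[t])
        num = decay * num + weight * v[t]
        den = decay * den + weight
        out.append(num / (den + 1e-8))
    return torch.stack(out)

# toy usage
y = wkv_recurrence(torch.randn(8, 4), torch.randn(8, 4), torch.rand(4))
print(y.shape)  # torch.Size([8, 4])
```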
scikit-image
Image processing in Python
taming-transformers
Taming Transformers for High-Resolution Image Synthesis
efficient-kan
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
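A toy sketch of the KAN idea: learnable univariate functions on each edge instead of fixed activations. For simplicity it uses a Gaussian radial-basis expansion rather than the B-spline parameterization efficient-kan actually implements; the class and parameter names are illustrative, not the repo's API.

```python
import torch
import torch.nn as nn

class ToyKANLayer(nn.Module):
    """Each input feature passes through a learnable 1D function
    (a small RBF expansion) per output unit, then the results are
    summed -- the Kolmogorov-Arnold idea in its simplest form."""
    def __init__(self, in_dim, out_dim, n_basis=8):
        super().__init__()
        # fixed RBF centers on [-2, 2]; learnable mixing coefficients per edge
        self.register_buffer("centers", torch.linspace(-2, 2, n_basis))
        self.coeff = nn.Parameter(torch.randn(out_dim, in_dim, n_basis) * 0.1)

    def forward(self, x):                      # x: (B, in_dim)
        # Gaussian basis values for every input feature: (B, in_dim, n_basis)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))
        # combine basis functions per (output, input) edge, then sum over inputs
        return torch.einsum("bik,oik->bo", phi, self.coeff)

layer = ToyKANLayer(3, 5)
print(layer(torch.randn(4, 3)).shape)  # torch.Size([4, 5])
```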
torchscale
Foundation Architecture for (M)LLMs
torchtitan
A native PyTorch library for large model training
vq-vae-2-pytorch
Implementation of "Generating Diverse High-Fidelity Images with VQ-VAE-2" in PyTorch
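A minimal sketch of the vector-quantization step that VQ-VAE-2 stacks hierarchically: nearest-codebook lookup with a straight-through gradient. This is a simplified illustration, not the repo's module (which also adds codebook/commitment losses or EMA codebook updates).

```python
import torch
import torch.nn as nn

class ToyVectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through
    estimator -- the core VQ-VAE operation, heavily simplified."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):                      # z: (B, dim) encoder outputs
        # squared distances to every code vector: (B, num_codes)
        d = torch.cdist(z, self.codebook.weight)
        idx = d.argmin(dim=1)                  # index of nearest code
        z_q = self.codebook(idx)               # quantized vectors
        # straight-through: forward pass uses z_q, gradient flows back to z
        z_q = z + (z_q - z).detach()
        return z_q, idx

vq = ToyVectorQuantizer()
z_q, idx = vq(torch.randn(8, 64))
print(z_q.shape, idx.shape)  # torch.Size([8, 64]) torch.Size([8])
```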
ThunderKittens
Tile primitives for speedy kernels
ViT-Adapter
[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
WritingAIPaper
Writing AI Conference Papers: A Handbook for Beginners
infinibatch
Efficient, checkpointed data loading for deep learning with massive datasets.
ControlCap
[ECCV 2024] ControlCap: Controllable Region-level Captioning
VMamba_onnx
Export VMamba (Visual State Space Models) to ONNX. Code is based on VMamba: https://github.com/MzeroMiko/VMamba