ZZK's repositories
faster-nougat
An implementation of Nougat focused on processing PDFs locally.
tiny-gpu
A minimal GPU design in Verilog to learn how GPUs work from the ground up
EETQ
Easy and Efficient Quantization for Transformers
BitBLAS
BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.
lightning-thunder
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch that enables using different hardware executors at once, across one or thousands of GPUs.
open-gpu-kernel-modules
NVIDIA Linux open GPU kernel modules with P2P support
quanto
A PyTorch quantization toolkit
triton
Development repository for the Triton language and compiler
auto-round
SOTA Weight-only Quantization Algorithm for LLMs
attorch
A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.
qllm-eval
Code Repository of Evaluating Quantized Large Language Models
cutlass_master
CUDA Templates for Linear Algebra Subroutines
cccl
CUDA C++ Core Libraries
cudnn-frontend
cudnn_frontend provides a C++ wrapper for the cuDNN backend API, along with samples showing how to use it
Triton-Puzzles
Puzzles for learning Triton
GPUSorting
OneSweep, implemented in CUDA, D3D12, and Unity-style compute shaders. Theoretically portable to all wave/warp/subgroup sizes.
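The building block behind GPU radix sorts like OneSweep is a stable counting-sort pass per digit. A minimal sketch in plain Python (conceptual only, not the repo's CUDA/D3D12 kernels, which fuse the histogram and scatter with a chained prefix scan):

```python
# One 8-bit digit pass of an LSD radix sort: histogram, exclusive prefix
# sum, then a stable scatter into the output array.
def radix_pass(keys, shift):
    counts = [0] * 256
    for k in keys:
        counts[(k >> shift) & 0xFF] += 1
    offsets, total = [], 0
    for c in counts:                     # exclusive prefix sum -> bucket starts
        offsets.append(total)
        total += c
    out = [0] * len(keys)
    for k in keys:                       # stable scatter preserves prior order
        d = (k >> shift) & 0xFF
        out[offsets[d]] = k
        offsets[d] += 1
    return out

def radix_sort32(keys):
    # Four passes, least significant digit first, cover 32-bit unsigned keys.
    for shift in (0, 8, 16, 24):
        keys = radix_pass(keys, shift)
    return keys

print(radix_sort32([3_000_000_000, 42, 7, 99]))  # ascending order
```

OneSweep's contribution is doing the prefix sum for all digit passes in a single sweep over the data using decoupled look-back, rather than a separate scan kernel per pass.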
APPy
APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to OpenMP, and automatically compiles the annotated code to GPU kernels.
KVQuant
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
LLMRoofline
Compare different hardware platforms via the Roofline Model for LLM inference tasks.
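The roofline model caps attainable performance at the minimum of peak compute and bandwidth times arithmetic intensity. A sketch of why LLM decode is memory-bound, using hypothetical A100-class hardware numbers (assumptions, not values from the repo):

```python
# Hypothetical peak numbers for an A100-class GPU (assumed for illustration).
PEAK_FLOPS = 312e12      # FP16 tensor-core peak, FLOP/s
PEAK_BW = 2.0e12         # HBM bandwidth, byte/s

def attainable_flops(arithmetic_intensity):
    """Roofline: performance is bounded by compute or by memory bandwidth."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)

# Decode-phase GEMV on a (d, d) FP16 weight matrix with batch size 1:
# 2*d*d FLOPs, and ~2*d*d bytes of weights read -> intensity ~1 FLOP/byte.
d = 4096
flops = 2 * d * d
bytes_moved = 2 * d * d                 # 2 bytes per FP16 weight
intensity = flops / bytes_moved
print(intensity)                        # 1.0 FLOP/byte
print(attainable_flops(intensity) / PEAK_FLOPS)  # small fraction of peak
```

At ~1 FLOP/byte, decode sits far below the ridge point (PEAK_FLOPS / PEAK_BW = 156 FLOP/byte here), which is why weight quantization, by shrinking bytes_moved, speeds up inference.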
KIVI
KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache
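Asymmetric low-bit quantization stores a per-group minimum (zero point) and scale so the 4 levels of a 2-bit code span each group's actual range. A NumPy sketch of the numerics, per channel (conceptual only, not KIVI's fused kernels or its per-channel/per-token key-value split):

```python
import numpy as np

def quantize_2bit(x, axis=0):
    """Asymmetric 2-bit quantization: map each slice's [min, max] to codes 0..3."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    scale = (hi - lo) / 3.0              # 2 bits -> 4 levels
    q = np.round((x - lo) / scale).clip(0, 3).astype(np.uint8)
    return q, scale, lo

def dequantize_2bit(q, scale, lo):
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4)).astype(np.float32)
q, s, z = quantize_2bit(x, axis=0)       # per-channel along axis 0
x_hat = dequantize_2bit(q, s, z)
print(np.abs(x - x_hat).max())           # rounding error, bounded by scale/2
</n```

The "asymmetric" part is the stored zero point lo: unlike symmetric schemes centered at 0, each channel's grid is shifted to its own range, which matters for the skewed distributions of KV-cache activations.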
fp6_llm
Efficient GPU support for LLM inference with 6-bit quantization (FP6).
marlin
FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens.
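The FP16xINT4 format packs two 4-bit weight codes per byte and dequantizes them to FP16 with a scale before the multiply. A NumPy sketch of that packing arithmetic (marlin itself is a fused CUDA kernel; this only illustrates the numerics, and the zero point of 8 is an assumption):

```python
import numpy as np

def pack_int4(q):
    """Pack unsigned 4-bit codes (0..15), two per byte: even index in the
    low nibble, odd index in the high nibble."""
    q = q.astype(np.uint8)
    return q[0::2] | (q[1::2] << 4)

def unpack_int4(packed):
    out = np.empty(packed.size * 2, dtype=np.uint8)
    out[0::2] = packed & 0x0F
    out[1::2] = packed >> 4
    return out

def dequant(q, scale, zero=8):
    """Symmetric dequantization around an assumed zero point of 8."""
    return (q.astype(np.float16) - zero) * scale

w = np.arange(64) % 16                  # example 4-bit codes
packed = pack_int4(w)                   # 32 bytes instead of 128 for FP16
assert np.array_equal(unpack_int4(packed), w)
```

The 4x speedup at small batch sizes follows directly from the roofline view: decode is bandwidth-bound, and INT4 moves a quarter of the bytes of FP16.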
gemma_pytorch
The official PyTorch implementation of Google's Gemma models