Chaofan Lin's repositories
Triton-Puzzles-Lite
Puzzles for learning Triton. Play with minimal environment configuration!
Masterball
A compiler from the "Mx* language" (a C++- and Java-like language) to RV32I assembly, with optimizations on LLVM IR. SJTU CS2966 Project.
NightWizard
SJTU CS2951 Computer Architecture Course Project: a RISC-V CPU implemented in Verilog HDL.
CoconutJVM
A toy JVM (Java Virtual Machine) written in C++. For learning purposes.
homework-latex-template
A single-file LaTeX template for my assignments
hexo-theme-cactus-customized
:cactus: A responsive, clean and simple theme for Hexo.
siriusneo.github.io
Metric Space: my new personal blog, powered by Hexo and the Cactus theme!
SJTU-CS3952-Database-System
SJTU CS3952 Database System Project: bookstore backend logic.
Touhou-QQBot
A QQ bot for a certain Touhou group chat, built on the Mirai and Graia frameworks.
DarkSwordVM
A toy just-in-time (JIT) virtual machine in LLVM IR. SJTU CS2965 Project.
GOODBOUNCE
🏀 Balancing is Boring, Let’s Try Bouncing! SJTU CS3316 Reinforcement Learning Course Project.
ParrotServe
[OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Quest
[ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference
rCore-Tutorial-v3-THU-AOS-Course-Lab
80240442 Advanced Operating System Course Lab, rCore v3, my solutions. Let's write an OS that can run on RISC-V in Rust, from scratch!
relax
Temporary repo for prototyping Relax (Relay next); the effort will be upstreamed. We use the wiki pages on this repo to host design docs.
Social-Recommender-Systems
🤗 CS3612 Machine Learning Course Project, An exploration in Social Recommender Systems.
THU-Computer-Graphics
70240243 Computer Graphics Course Project, Tsinghua University
tilelang
A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels.
Triton-Puzzles
Puzzles for learning Triton
tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators.
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs