KylinChen's repositories
ChatFinance
A large language model (LLM) for question answering over financial reports.
mmVital-Signs
The mmVital-Signs project aims at vital-sign detection and provides a standard Python API for Texas Instruments (TI) mmWave hardware, such as the xWR14xx, xWR16xx, and xWR68xx.
InstaPicture
A photo-recommendation social app built with React and Python, supporting content recommendation, friend feeds, and user albums.
ChinaVis-Challenge-2020
ChinaVis Visualization Challenge 2020 entry: analysis of economic and public-opinion impact.
KylinC.github.io
A blog for cute KylinChen.
RoboGrab-Sorter
This project uses deep reinforcement learning to train a robot, combining a mobile platform with a Panda robotic arm, to automatically grasp objects on a tabletop and classify them.
SJTU-CS489
Reinforcement Learning course project.
WhatTheHellBackingSchool
A back-to-school notice generator / mobile web app / simple Flask webpage, just for fun.
alpa-review
Training and serving large-scale neural networks with auto parallelization.
Awesome-Efficent-LLM-Inference
✨✨Latest Advances on Efficient LLM Inference.
Awesome-LLM-Inference
📖A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, continuous batching, FlashAttention, PagedAttention, etc.
DistServe
Disaggregated serving system for Large Language Models (LLMs).
LaVIT
LaVIT: Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization
Llama-3-Distill
A distilled version of llama-68m, for MLSys research use only.
llm-analysis
Latency and Memory Analysis of Transformer Models for Training and Inference
nerf
Code release for NeRF (Neural Radiance Fields)
nerf_pl
NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
NN-CUDA-Example
Several simple examples for popular neural network toolkits calling custom CUDA operators.
PaddleClas
A treasure chest for visual classification and recognition powered by PaddlePaddle
Paper-Reading-Lists
Random collections of my interested research papers / projects
SJTUThesis
Shanghai Jiao Tong University XeLaTeX template for theses and course papers.
sys-deadlines
Countdown for systems conference deadlines
vllm
A high-throughput and memory-efficient inference and serving engine for LLMs