Shengyu Liu (interestingLSY)

Company: Peking University

Location: Peking University, Beijing, China

Home Page: https://interestinglsy.github.io/

Organizations
intoj
LLMServe

Shengyu Liu's repositories

swiftLLM

A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance in only about 2k lines of code (roughly 2% of vLLM).

Language: Python | License: Apache-2.0 | Stargazers: 60 | Issues: 0

CUDA-From-Correctness-To-Performance-Code

Code and examples for "CUDA - From Correctness to Performance"

Language: C++ | License: Apache-2.0 | Stargazers: 23 | Issues: 2

intServer

A high-performance Minecraft server written in C++20

Language: JavaScript | Stargazers: 4 | Issues: 1

NeuroFrame

A tiny framework for AI training and inference, written as homework for the course "Programming in AI".

Language: C++ | License: LGPL-2.1 | Stargazers: 2 | Issues: 1

node-stratum-pool

High-performance Stratum pool server in Node.js

Language: JavaScript | License: GPL-2.0 | Stargazers: 1 | Issues: 1

NoobPool

NoobPool, a mining pool for beginners

License: AGPL-3.0 | Stargazers: 1 | Issues: 2

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

Language: Python | License: Apache-2.0 | Stargazers: 1 | Issues: 0

cherry-markdown

✨ A Markdown Editor

Language: JavaScript | License: NOASSERTION | Stargazers: 0 | Issues: 1

contrib-mlcommons-ck

Fork of the repo "mlcommons/ck" that adds a multi-node PyTorch SUT implementation

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0

contrib-mlcommons-inference

Fork of the repo "mlcommons/inference" that implements a multi-node PyTorch SUT

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0

cpuminer-opt

Optimized multi-algorithm CPU miner

Language: C | License: NOASSERTION | Stargazers: 0 | Issues: 1

DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

gitignore

A collection of useful .gitignore templates

License: CC0-1.0 | Stargazers: 0 | Issues: 1

lightllm

LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

License: Apache-2.0 | Stargazers: 0 | Issues: 0

luogu-problem-list

A problem list for the Luogu OJ.

Stargazers: 0 | Issues: 1

Pintos-gitbook

The GitBook for the Pintos project at Peking University.

Stargazers: 0 | Issues: 1

pyecm

Pyecm factors large integers (up to 50 digits) using the Elliptic Curve Method (ECM), a fast factoring algorithm.

Language: Python | License: GPL-2.0 | Stargazers: 0 | Issues: 1
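
The pyecm library's own API is not shown on this page; as a sketch of the method the description names, here is a minimal, self-contained Lenstra ECM in plain Python. All names (`ecm_factor`, `FactorFound`, the helpers) are illustrative and are not pyecm's interface. The core idea: do elliptic-curve arithmetic modulo the composite n, and when a modular inverse fails, the gcd that caused the failure is a nontrivial factor.

```python
import math
import random


class FactorFound(Exception):
    """Raised when a modular inverse fails; the gcd is a factor of n."""
    def __init__(self, g):
        self.g = g


def _inv(x, n):
    # Modular inverse of x mod n; a non-unit gcd reveals a factor.
    g = math.gcd(x % n, n)
    if g != 1:
        raise FactorFound(g)
    return pow(x, -1, n)  # requires Python 3.8+


def _add(P, Q, a, n):
    # Point addition on y^2 = x^3 + a*x + b (mod n); None = point at infinity.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * _inv(2 * y1, n) % n
    else:
        m = (y2 - y1) * _inv(x2 - x1, n) % n
    x3 = (m * m - x1 - x2) % n
    return (x3, (m * (x1 - x3) - y1) % n)


def _mul(k, P, a, n):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = _add(R, P, a, n)
        P = _add(P, P, a, n)
        k >>= 1
    return R


def ecm_factor(n, curves=100, bound=500):
    """Search for a nontrivial factor of composite n using random curves."""
    for _ in range(curves):
        # Pick a random point and curve parameter a; b is implied by the
        # choice of (x0, y0), so the point lies on the curve by construction.
        x0, y0, a = (random.randrange(1, n) for _ in range(3))
        P = (x0, y0)
        try:
            for k in range(2, bound):
                P = _mul(k, P, a, n)  # after step k, P = k! * P0
                if P is None:
                    break  # reached infinity cleanly; try another curve
        except FactorFound as f:
            if 1 < f.g < n:
                return f.g
    return None
```

For example, `ecm_factor(101 * 103)` returns 101 or 103 after a handful of random curves. Real ECM implementations add a second stage and far larger smoothness bounds to reach the ~50-digit factors the description mentions; this sketch only shows the stage-1 mechanism.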

pysinsy

Python wrapper for Sinsy

Language: Python | License: MIT | Stargazers: 0 | Issues: 1