CHEN Yuhan (lzzmm)

Company: HKUST(Guangzhou)

Location: Guangzhou

Home Page: https://lzzmm.github.io

Organizations: sysu

CHEN Yuhan's starred repositories

Language: C++ · Stargazers: 12 · Issues: 0

desigen

Official code for the paper "Desigen: A Pipeline for Controllable Design Template Generation" [CVPR'24]

Language: Python · Stargazers: 52 · Issues: 0

BitDistiller

[ACL 2024] A novel quantization-aware training (QAT) framework with self-distillation to enhance ultra-low-bit LLMs.

Language: Python · License: MIT · Stargazers: 56 · Issues: 0

Megatron-DeepSpeed

Ongoing research training transformer language models at scale, including: BERT & GPT-2

Language: Python · License: NOASSERTION · Stargazers: 1793 · Issues: 0

LLMSys-PaperList

Large Language Model (LLM) Systems Paper List

Stargazers: 533 · Issues: 0

ccf-deadlines

⏰ Collaboratively track deadlines of conferences recommended by CCF (website, Python CLI, WeChat applet).

Language: Vue · License: MIT · Stargazers: 5642 · Issues: 0

matplotlib-cheatsheet

Matplotlib 3.1 cheat sheet.

Language: Python · License: BSD-2-Clause · Stargazers: 2896 · Issues: 0
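
For context on what the cheat sheet covers, here is a minimal Matplotlib sketch of the plotting API it summarizes (the sine data and output file name below are purely illustrative, not taken from the cheat sheet):

```python
# Minimal Matplotlib sketch: a labeled line plot saved to disk.
# The sine data and output file name are illustrative examples.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(4, 3))
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("sine.png", dpi=150)
```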

line_profiler

Line-by-line profiling for Python

Language: Python · License: NOASSERTION · Stargazers: 2617 · Issues: 0
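
As a quick reminder of what line-by-line profiling looks like, here is a minimal usage sketch (the `slow_sum` function is a made-up example; the wrap-and-print pattern is the commonly documented `LineProfiler` API):

```python
# Minimal line_profiler sketch: wrap a function and print per-line timings.
# Assumes `pip install line_profiler`; `slow_sum` is a hypothetical example.
from line_profiler import LineProfiler

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = LineProfiler()
profiled = profiler(slow_sum)   # wrap the target function
profiled(100_000)               # run it once under the profiler
profiler.print_stats()          # per-line hit counts and timings
```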

cutlass

CUDA Templates for Linear Algebra Subroutines

Language: C++ · License: NOASSERTION · Stargazers: 5107 · Issues: 0

Awesome-LLM-Prune

Awesome list for LLM pruning.

Stargazers: 92 · Issues: 0

RedBird3D

3D model for HKUST Redbird (Sundial)

Stargazers: 15 · Issues: 0

HKUST-GZ-MPhil-PQA-Template

LaTeX template for the MPhil PQA at HKUST-GZ

Language: TeX · License: LPPL-1.3c · Stargazers: 5 · Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python · License: Apache-2.0 · Stargazers: 34344 · Issues: 0
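
As a rough illustration of DeepSpeed's engine API, here is a minimal sketch assuming a toy PyTorch model and an illustrative ZeRO-1 config dict (not the repository's own example); in practice such scripts are launched with the `deepspeed` launcher on GPU machines:

```python
# Minimal DeepSpeed sketch: wrap a toy model in a DeepSpeed engine and take one step.
# The model, batch size, and ZeRO-1 config below are illustrative assumptions.
import torch
import deepspeed

model = torch.nn.Linear(512, 512)
ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 1},
}

# deepspeed.initialize returns an engine that owns the optimizer and data parallelism.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 512).to(engine.device)
loss = engine(x).pow(2).mean()
engine.backward(loss)   # engine handles loss scaling / gradient partitioning
engine.step()
```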

Awesome_LLM_System-PaperList

Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. This is a list of papers on LLM acceleration, currently focused mainly on inference; related works will be added over time. Contributions are welcome!

Stargazers: 132 · Issues: 0

extension-script

Example repository for custom C++/CUDA operators for TorchScript

Language: Python · Stargazers: 112 · Issues: 0

Megatron-LM

Ongoing research training transformer models at scale

Language: Python · License: NOASSERTION · Stargazers: 9692 · Issues: 0

LLMs-Works-View

Focuses on current works and surveys on LLMs and their applications

Stargazers: 3 · Issues: 0

FasterTransformer

Transformer-related optimizations, including BERT and GPT

Language: C++ · License: Apache-2.0 · Stargazers: 5717 · Issues: 0

sputnik

A library of GPU kernels for sparse matrix operations.

Language: C++ · License: Apache-2.0 · Stargazers: 239 · Issues: 0

Jike-crawl

Crawl and save posts, notifications, collections and other personal data from web.okjike.com

Language: Python · License: MIT · Stargazers: 4 · Issues: 0

LingLang

🎈 LingLang (玲珑语言): a statically compiled language with no VM.

Language: C# · License: MIT · Stargazers: 4 · Issues: 0

SYsU-lang

A mini, simple, and modular compiler lab for SYsU/SysY (tiny C), based on Clang/LLVM/ANTLR4/Bison/Flex.

Language: C · License: NOASSERTION · Stargazers: 205 · Issues: 0

lzzmm.github.io

炸毛's secret base (personal homepage)

Language: HTML · License: Apache-2.0 · Stargazers: 7 · Issues: 0