Huanglk (HuangLK)

User data from GitHub: https://github.com/HuangLK

Company: SYSU

Location: Shenzhen, China

Home Page: https://www.baidu.com

GitHub: @HuangLK


Organizations
SYSUMSTC

HuangLK's repositories

transpeeder

Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism

Language: Python | License: Apache-2.0 | Stargazers: 216 | Issues: 6 | Issues: 31

leeroy

An NLP model training framework based on pytorch-lightning and huggingface.

bigscience

Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data.

Language: Shell | License: NOASSERTION | Stargazers: 0 | Issues: 0 | Issues: 0

cs224n-winter-2017

All lecture notes, slides and assignments from CS224n: Natural Language Processing with Deep Learning class by Stanford

Language: HTML | Stargazers: 0 | Issues: 1 | Issues: 0

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

gpt-neox

An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

lightning-transformers

Flexible components pairing 🤗 Transformers with Pytorch Lightning

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

nebullvm

Plug and play modules to optimize the performances of your AI systems 🚀

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

Open-Assistant

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

Language: Python | License: Apache-2.0 | Stargazers: 0 | Issues: 0 | Issues: 0

trlx

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)

Language: Python | License: MIT | Stargazers: 0 | Issues: 0 | Issues: 0