Shuchang Zhou's repositories
llama_infer
Inference script for Meta's LLaMA models using the Hugging Face wrapper
kid-programming
Programming exercises for kids (no prior programming experience required)
alpaca-lora
Instruct-tune LLaMA on consumer hardware
accelerate
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, and mixed-precision support
alpa
Training and serving large-scale neural networks
Alpaca-CoT
We extend CoT data to Alpaca to boost its reasoning ability, and we are continually expanding our collection of instruction-tuning datasets.
chain-of-thought-hub
Benchmarking LLM reasoning performance with chain-of-thought prompting
GPT-4-LLM
Instruction Tuning with GPT-4
GPTeacher
A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer
langchain
⚡ Building applications with LLMs through composability ⚡
MOSS
An open-source tool-augmented conversational language model from Fudan University
nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
petals
🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
pinyin-to-ipa
Command-line interface and Python library to transcribe pinyin to IPA. The tones are attached to the vowel of the syllable.
tau
Pipeline Parallelism for PyTorch
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
wechat-chatgpt
Use ChatGPT on WeChat via wechaty