Eugene Yan's starred repositories
accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
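The core idea behind k-bit quantization can be sketched in a few lines. This is not bitsandbytes' actual API; it is a hand-rolled illustration of 8-bit absmax quantization, where floats are scaled into the int8 range by their absolute maximum and a single scale factor is stored for dequantization:

```python
# Minimal sketch of 8-bit absmax quantization (illustrative, not
# bitsandbytes' API; function names are made up for this example).

def absmax_quantize(values):
    """Scale floats into the int8 range [-127, 127] by the absolute maximum."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [round(v / scale) for v in values]
    return quantized, scale

def absmax_dequantize(quantized, scale):
    """Recover approximate floats from int8 values and the stored scale."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = absmax_quantize(weights)
approx = absmax_dequantize(q, scale)
# Reconstruction error is bounded by half the quantization step (scale / 2).
```

Storing weights as int8 plus one scale per tensor (or per block, as bitsandbytes does) is what makes large models fit in far less memory.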
instructor
Structured outputs for LLMs
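The structured-outputs idea is to treat a model's reply not as free text but as JSON that must satisfy a schema. The sketch below is not instructor's API (which builds on Pydantic response models); it is a hand-rolled validator, assumed here purely for illustration:

```python
# Illustrative stand-in for schema-validated LLM output
# (not instructor's actual API).

import json

def parse_structured(raw, required_fields):
    """Parse a model reply as JSON and check required typed fields."""
    data = json.loads(raw)  # raises ValueError on malformed output
    for name, expected_type in required_fields.items():
        if name not in data:
            raise ValueError(f"missing field: {name}")
        if not isinstance(data[name], expected_type):
            raise ValueError(f"field {name!r} is not {expected_type.__name__}")
    return data

# Example: a model reply constrained to a user-extraction schema.
reply = '{"name": "Ada", "age": 36}'
user = parse_structured(reply, {"name": str, "age": int})
```

Failing loudly on malformed or mistyped output is what lets downstream code rely on the model's response like any other typed value.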
alignment-handbook
Robust recipes to align language models with human and AI preferences
higgsfield
Fault-tolerant, highly scalable GPU orchestration and a machine learning framework designed for training models with billions to trillions of parameters
CTranslate2
Fast inference engine for Transformer models
lm-human-preferences
Code for the paper "Fine-Tuning Language Models from Human Preferences"
hallucination-leaderboard
Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
summarize-from-feedback
Code for "Learning to summarize from human feedback"
obsidian-kindle-plugin
Sync your Kindle notes and highlights directly into your Obsidian vault
bigcode-evaluation-harness
A framework for the evaluation of autoregressive code generation language models.
llm_distillation_playbook
Best practices for distilling large language models.
get-lambda
Use GitHub Actions to acquire those precious Lambda GPUs
swe-study-group
Code for the SWE study group