Hao Zhao's starred repositories
GPT-Fathom
GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well as OpenAI's earlier models on 20+ curated benchmarks under aligned settings.
open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
llm-foundry
LLM training code for Databricks foundation models
direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
mistral-inference
Official inference library for Mistral models
why-weight-decay
Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024]
sam-low-rank-features
Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023]
instagraph
Converts text input or a URL into a knowledge graph and displays it
Flash-Attention-Softmax-N
CUDA and Triton implementations of Flash Attention with SoftmaxN.
open-interpreter
A natural language interface for computers
awesome_lists
Awesome Lists for Tenure-Track Assistant Professors and PhD students. (A survival guide for assistant professors and PhD students)
awesome-obsidian
🕶️ Awesome stuff for Obsidian
loss-landscapes
Approximating neural network loss landscapes in low-dimensional parameter subspaces for PyTorch
AlpacaDataCleaned
Alpaca dataset from Stanford, cleaned and curated
awesome-source-free-test-time-adaptation
A curated list of papers in Test-time Adaptation, Test-time Training and Source-free Domain Adaptation
lm-evaluation-harness
A framework for few-shot evaluation of language models.