Andy Zhao's repositories
ReasoningArxiv
Reasoning Arxiv feed, forked from MyArxiv
alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
dgl_forked
Python package built to ease deep learning on graphs, on top of existing DL frameworks.
evo
DNA foundation modeling from molecular to genome scale
FastChat
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and FastChat-T5.
guidance
A guidance language for controlling large language models.
langchain
⚡ Building applications with LLMs through composability ⚡
lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
llama
Inference code for LLaMA models
nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
NBFNet
Official implementation of Neural Bellman-Ford Networks (NeurIPS 2021)
PandemicLLM
Code and Data for Adapting Large Language Models to Forecast Pandemics in Real-time: A COVID-19 Case Study.
PEER_Benchmark
PEER Benchmark, appearing at the NeurIPS 2022 Datasets and Benchmarks Track (https://arxiv.org/abs/2206.02096)
picoGPT
An unnecessarily tiny implementation of GPT-2 in NumPy.
QLoRA
QLoRA: Efficient Finetuning of Quantized LLMs
single-cell-best-practices
This project is a work in progress! https://www.sc-best-practices.org
TAG-Benchmark-dev
Benchmark
TLM
ICML'2022: NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework
torchdrug-dev
Private annotated version of torchdrug