Benjamin Anderson's repositories
rewardmodeling
Train reward models for reinforcement learning from human feedback (RLHF).
mlx-soft-moe
Implementation of soft mixture-of-experts in MLX.
andersonbcdefg
Config files for my GitHub profile.
andersonbcdefg.github.io
My webpages
bufferpiece
SentencePiece tokenizer that operates on utf-8 bytes.
vision-models
My implementation of recent cutting-edge computer vision models.
brrrotary-embedding
PyTorch rotary position embedding (RoPE) that goes brrrr.
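As a sketch of the technique this repo implements: RoPE rotates each pair of query/key features by a position-dependent angle, so attention scores depend only on relative position. The NumPy version below is an illustrative sketch under my own naming, not this repo's PyTorch code.

```python
import numpy as np

def rope(x, base=10000.0):
    """Apply rotary position embeddings to x of shape (seq_len, dim).

    dim must be even: feature pairs (x[:, j], x[:, j + dim//2]) are
    rotated by an angle that grows with position and shrinks with j.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0
    half = dim // 2
    # One frequency per feature pair, geometrically spaced.
    inv_freq = base ** (-np.arange(half) / half)
    # Rotation angle for each (position, pair).
    theta = np.outer(np.arange(seq_len), inv_freq)  # (seq_len, half)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:, :half], x[:, half:]
    # Standard 2D rotation applied pairwise.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Because each pair undergoes a pure rotation, norms are preserved, and the dot product between two rotated vectors depends only on the difference of their positions.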
contrastive_losses
Functional implementations of contrastive losses.
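For context on what a functional contrastive loss looks like, here is a minimal InfoNCE-style sketch in NumPy (my own illustrative version, not necessarily what this repo provides):

```python
import numpy as np

def info_nce(queries, keys, temperature=0.07):
    """InfoNCE-style contrastive loss.

    queries, keys: (batch, dim) arrays where queries[i] and keys[i]
    form a positive pair; every other key in the batch serves as a
    negative. Returns the mean cross-entropy of matching each query
    to its own key.
    """
    # L2-normalize so the logits are cosine similarities.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature            # (batch, batch)
    # Row-wise log-softmax; the diagonal holds the positive pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Matched pairs should score a lower loss than mismatched ones, which is a quick sanity check for any implementation of this family.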
CTranslate2
Fast inference engine for Transformer models
embedding-laser
Compressing embedding models.
ezblock
Simple website-blocking CLI tool leveraging /etc/hosts.
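The underlying mechanism is simple: map a domain to an unroutable address in the hosts file so lookups fail. The sketch below is a hypothetical helper (not the ezblock CLI itself) and writes to a scratch file so it can run without root; the real tool would target /etc/hosts and need sudo.

```shell
#!/bin/sh
# Hosts-file blocking sketch. HOSTS_FILE overrides the target for testing.
HOSTS="${HOSTS_FILE:-/tmp/hosts.demo}"

block() {
    # Map the domain (and its www. variant) to 0.0.0.0 so lookups fail fast.
    printf '0.0.0.0 %s\n0.0.0.0 www.%s\n' "$1" "$1" >> "$HOSTS"
}

unblock() {
    # Drop any lines mentioning the domain.
    grep -v "$1" "$HOSTS" > "$HOSTS.tmp" && mv "$HOSTS.tmp" "$HOSTS"
}

: > "$HOSTS"        # start from a clean scratch file
block example.com
```

After `block example.com`, any resolver that consults the hosts file sends example.com traffic to 0.0.0.0; `unblock` reverses it by filtering the entries back out.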
instruct-pythia-ptuning
Using p-tuning (a form of parameter-efficient fine-tuning) to tune Pythia models on natural language instructions.
mlx-data
Efficient framework-agnostic data loading
mlx-examples
Examples in the MLX framework
shoggoth-chat
Chat with different alien entities in the latent space of gpt-3.5-turbo-0301
signals-ai
AI Journal
simlm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities