Brando Miranda's starred repositories
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
LLMs-from-scratch
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
language-server-protocol
Defines a common protocol for language servers.
open_llama
OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset
x-transformers
A concise but complete full-attention transformer with a set of promising experimental features from various papers
direct-preference-optimization
Reference implementation for DPO (Direct Preference Optimization)
alpaca_farm
A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data.
Portal-to-ISAbelle
https://albertqjiang.github.io/Portal-to-ISAbelle/
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
ultimate-anatome
Ἀνατομή is a PyTorch library to analyze representations of neural networks
ultimate-utils
Brando's utils
UnicodeSkipListTable
A library to create and use Unicode tables based on the skip list data structure.
UnicodeSkipListTableExample
A library that shows how to use the Unicode skip list tables generation tool to create a table to test if a codepoint is numeric.
Why-Has-Predicting-Downstream-Capabilities-Remained-Elusive
Code for Preprint: Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?