Miguel Ángel Medina Ramírez's starred repositories
LLMs-from-scratch
Implementing a ChatGPT-like LLM in PyTorch from scratch, step by step
alignment-handbook
Robust recipes to align language models with human and AI preferences
mixtral-offloading
Run Mixtral-8x7B models in Colab or on consumer desktops
AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
TransformerLens
A library for mechanistic interpretability of GPT-style language models
awesome-llm-interpretability
A curated list of Large Language Model (LLM) Interpretability resources.
TinderBotz
Automated Tinder bot and scraper using Selenium in Python.
LLM-workshop-2024
A 4-hour coding workshop to understand how LLMs are implemented and used
awesome-llm-human-preference-datasets
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval.
dora-from-scratch
LoRA and DoRA from Scratch Implementations
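For context on what a from-scratch LoRA implementation involves: a frozen weight W is adapted as W + (alpha / r) · B A, where A and B are small trainable matrices of rank r. A minimal NumPy sketch (all shapes and names here are illustrative, not the repo's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical layer sizes and LoRA rank

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, init to zero

def lora_forward(x):
    # Adapted layer: W plus the scaled low-rank update (alpha / r) * B @ A.
    # With B initialized to zero, this starts out identical to the frozen layer.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, d_in))
```

Only A and B would be updated during fine-tuning, so the number of trainable parameters is r · (d_in + d_out) instead of d_in · d_out.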
CircuitsVis
Mechanistic Interpretability Visualizations using React
sparse_coding
Using sparse coding to find distributed representations used by neural networks.
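Sparse coding here means finding codes Z that reconstruct activations X from a dictionary D while staying mostly zero. One standard way to solve that objective is ISTA (gradient step plus soft-thresholding); a sketch under that assumption, not the repo's own solver:

```python
import numpy as np

def ista_sparse_codes(X, D, lam=0.1, lr=0.1, steps=200):
    # Find sparse codes Z minimizing (1/2)||X - Z D||^2 + lam * ||Z||_1
    # for a fixed dictionary D of shape (k, p), via ISTA.
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(steps):
        grad = (Z @ D - X) @ D.T  # gradient of the reconstruction term
        Z = Z - lr * grad
        # Soft-thresholding is the proximal step for the L1 penalty;
        # it drives small entries exactly to zero.
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lr * lam, 0.0)
    return Z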
CKA-similarity
A NumPy and PyTorch implementation of CKA similarity with CUDA support
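For reference, linear CKA compares two sets of representations X (n × p1) and Y (n × p2) via their feature-space cross-covariance; a minimal NumPy version (a sketch of the standard linear-CKA formula, not this repo's code):

```python
import numpy as np

def linear_cka(X, Y):
    # Linear CKA on column-centered features:
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den
```

CKA of a representation with itself is 1, and the score is invariant to isotropic scaling and orthogonal transforms, which is what makes it useful for comparing layers of different widths.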
goodai-ltm-benchmark
A library for benchmarking the long-term memory and continual-learning capabilities of LLM-based agents, with all the tests and code you need to evaluate your own agents. See more in the blog post.
gpt2-greater-than
Code release for the 2023 NeurIPS paper "How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model"
nn_pruning_uniqueness
Prune a model while fine-tuning or training.