Since 2021, there have been enormous advances in the field of NLP, most notably GPT-4 and ChatGPT. The aim of this repository is to understand the mechanics behind powerful language models by coding and training them from scratch. Currently, I am working on understanding the LLaMA model and using LoRA/QLoRA to fine-tune it more efficiently.
Jump to the karpathy-gpt directory.
Starting from a simple bigram language model, this builds out GPT from scratch. The completed GPT model can be trained on character-level text data (e.g. Tiny Shakespeare) to generate convincing Shakespeare-like text. Key concepts include:
- Self-Attention
- Scaled Dot-Product Attention
- Multi-Head Self-Attention
- Layer Normalization
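The core of the concepts above is scaled dot-product attention: each position builds queries, keys, and values, scores queries against keys, and takes a causally masked, softmax-weighted average of the values. A minimal NumPy sketch (the repo itself builds this in PyTorch; function and variable names here are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, causal=True):
    # q, k, v: (seq_len, d_k). Scores are scaled by sqrt(d_k) to keep
    # the softmax from saturating as the head dimension grows.
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)               # (seq_len, seq_len)
    if causal:
        # Mask out future positions so token t only attends to tokens <= t.
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (seq_len, d_k)

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
out = scaled_dot_product_attention(q, k, v)
```

Multi-head attention simply runs several of these in parallel on learned projections of the input and concatenates the results.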
Jump to the llama directory.
Building out the LLaMA 2 language model from Meta AI. The model can load pre-trained weights and perform inference. Key concepts include:
- Rotary Positional Embeddings
- KV-Cache
- Grouped-Query Attention
- RMSNorm
- SwiGLU Activation Function
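Of the concepts above, RMSNorm is the simplest to show in isolation: LLaMA replaces LayerNorm with a normalization that divides by the root-mean-square of the features, with no mean subtraction and no bias. A small NumPy sketch (illustrative only; the actual model implements this as a PyTorch module with a learned gain):

```python
import numpy as np

def rmsnorm(x, gain, eps=1e-6):
    # Scale each feature vector by the reciprocal of its root-mean-square.
    # Unlike LayerNorm there is no mean subtraction, which is cheaper and
    # works just as well in practice for transformer blocks.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return gain * x / rms

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16))
y = rmsnorm(x, gain=np.ones(16))  # gain would be a learned parameter
```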
Jump to the lora directory.
LoRA is a fine-tuning method that drastically reduces the number of trainable parameters in pre-trained language models by freezing the original weights and adding a pair of small, trainable low-rank matrices alongside them. It is motivated by singular value decomposition, in which a matrix is factored into its singular values and singular vectors; because most of the important information in a weight matrix is often captured by the first few singular values, a low-rank update can stand in for a full-rank one. Key concepts include:
- Singular Value Decomposition
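The intuition can be checked directly with NumPy: truncate an SVD to its top-r singular values to get a rank-r approximation, then form the LoRA-style update W + BA, where only the small B and A would be trained. A sketch under illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))          # stand-in for a frozen pre-trained weight

# SVD: W = U @ diag(S) @ Vt, with singular values sorted in descending order.
U, S, Vt = np.linalg.svd(W)

# Rank-r approximation keeps only the top-r singular values/vectors.
r = 8
W_r = U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]

# LoRA analogue: frozen W plus a trainable low-rank update B @ A.
B = np.zeros((64, r))                  # zero-initialized, so training starts at W
A = rng.normal(size=(r, 64)) * 0.01
W_eff = W + B @ A

# Only B and A are trained: 2 * 64 * r parameters instead of 64 * 64.
lora_params = B.size + A.size
```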
Jump to the lora directory.
A minimal reproduction of adding LoRA to fine-tune a pre-trained GPT-2 model from OpenAI. This serves as a starting point for my own LLM fine-tuning method, adapted from ideas in diffusion models.
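In practice, adding LoRA to a model like GPT-2 means wrapping selected linear layers so that the frozen weight is augmented with the low-rank update. A forward-pass-only NumPy sketch of such a wrapper (the actual repo would do this as a PyTorch module; the class name, `alpha` scaling, and initialization here are illustrative assumptions):

```python
import numpy as np

class LoRALinear:
    # Wrap a frozen linear weight with a trainable low-rank update B @ A.
    # B starts at zero so the wrapped layer initially matches the original.
    def __init__(self, weight, r=4, alpha=4.0, seed=0):
        rng = np.random.default_rng(seed)
        out_dim, in_dim = weight.shape
        self.weight = weight                          # frozen pre-trained weight
        self.A = rng.normal(size=(r, in_dim)) * 0.01  # trainable down-projection
        self.B = np.zeros((out_dim, r))               # trainable up-projection
        self.scale = alpha / r                        # common LoRA scaling factor

    def __call__(self, x):
        # y = x W^T + scale * (x A^T) B^T; only A and B would receive gradients.
        return x @ self.weight.T + (x @ self.A.T) @ self.B.T * self.scale

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 16))       # stand-in for one GPT-2 projection matrix
layer = LoRALinear(W, r=4)
x = rng.normal(size=(3, 16))
y = layer(x)
```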