Just documenting what I learn :)
-
Implemented the code from Andrej Karpathy's video
Video link: https://www.youtube.com/watch?v=kCc8FmEb1nY
-
Tried with the Harry Potter dataset (1 book)
-
Tried with all 7 books - decent results
-
Increased block size (context length) and used a frequency-based character encoding - better loss (1.8)
-
Increased epochs - better results (final loss: 1.4)
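The character-level setup from the video boils down to a tiny tokenizer; the `text` string below is just a stand-in for the actual corpus:

```python
# Minimal character-level tokenizer, as in Karpathy's GPT-from-scratch video.
text = "Harry Potter and the Philosopher's Stone"  # stand-in for the full corpus

chars = sorted(set(text))                      # vocabulary: every distinct character
stoi = {ch: i for i, ch in enumerate(chars)}   # char -> integer id
itos = {i: ch for ch, i in stoi.items()}       # integer id -> char

def encode(s):
    return [stoi[c] for c in s]

def decode(ids):
    return "".join(itos[i] for i in ids)
```

With a character vocabulary this small (a few dozen symbols), encode/decode is a lossless round trip.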
-
Tried BERT-based tokenization for encoding and decoding
After 1 epoch loss = 10.47
After 100 epochs loss = 6.31
After 300 epochs loss = 5.59
Colab link: https://colab.research.google.com/drive/1w2xrCzgQ7PejGULuiaOY_mQfZKdtH8UW
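The BERT-style encoding/decoding can be reproduced with the Hugging Face tokenizer (this assumes the `transformers` package is installed; `bert-base-uncased` is downloaded on first use). Note that the much larger subword vocabulary also explains the higher starting loss: a uniform prediction over BERT's ~30,522 tokens gives a cross-entropy near ln(30522) ≈ 10.3, close to the 10.47 seen after 1 epoch.

```python
from transformers import AutoTokenizer

# WordPiece tokenizer used by BERT (uncased, so text is lowercased on encode)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")

ids = tok.encode("Harry Potter raised his wand.", add_special_tokens=False)
text = tok.decode(ids)   # subword ids back to a string
```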
- Torch LR finder: the learning rate is increased after each processed batch and the corresponding loss is logged. The result is an LR vs. loss plot that can be used as guidance for choosing an optimal initial learning rate.
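The idea behind the LR finder can be sketched without any extra library: grow the learning rate geometrically after each batch and record the loss (toy model and random data here, purely illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 1)                               # toy model
opt = torch.optim.SGD(model.parameters(), lr=1e-6)
loss_fn = nn.MSELoss()
X, y = torch.randn(256, 10), torch.randn(256, 1)       # toy dataset

start_lr, end_lr, num_iter = 1e-6, 1.0, 100
gamma = (end_lr / start_lr) ** (1 / (num_iter - 1))    # multiplicative LR step

lrs, losses, lr = [], [], start_lr
for _ in range(num_iter):
    batch = torch.randint(0, 256, (32,))               # random mini-batch
    opt.zero_grad()
    loss = loss_fn(model(X[batch]), y[batch])
    loss.backward()
    opt.step()
    lrs.append(lr)
    losses.append(loss.item())
    lr *= gamma                                        # raise LR for the next batch
    for g in opt.param_groups:
        g["lr"] = lr
# plot lrs vs. losses on a log-x axis and pick an LR just before the loss blows up
```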
- One cycle policy: start with a low learning rate, gradually increase it to a maximum value (the peak of the cycle), and then gradually decrease it again. Momentum is typically cycled in the opposite direction: lowered while the learning rate rises and raised back as it falls. The intuition is that starting with a lower learning rate helps the model converge, the higher learning rate in the middle of training lets it escape sharp minima and explore the loss landscape more efficiently, and lowering the learning rate towards the end helps the model settle into a more refined solution.
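PyTorch ships this schedule as `torch.optim.lr_scheduler.OneCycleLR`; a minimal sketch of its shape with a dummy optimizer (one scheduler step per batch, `total_steps` batches in all):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]
opt = torch.optim.SGD(params, lr=0.1)
total_steps = 100

# LR warms up to max_lr over the first pct_start fraction (default 0.3) of steps,
# then anneals down; momentum is cycled inversely by default for SGD.
sched = torch.optim.lr_scheduler.OneCycleLR(opt, max_lr=0.1, total_steps=total_steps)

lrs = []
for _ in range(total_steps):
    opt.step()                              # normally: forward/backward happen here
    sched.step()                            # advance the schedule once per batch
    lrs.append(opt.param_groups[0]["lr"])
```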
- LoRA (Low-Rank Adaptation of Large Language Models): a popular, lightweight fine-tuning technique that significantly reduces the number of trainable parameters. It works by inserting a small number of new weights (low-rank matrices) into the model and training only those.
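A minimal sketch of the idea in plain PyTorch (not the `peft` implementation): freeze the base weight and learn a low-rank update B·A, with B initialized to zero so the layer starts out identical to the pretrained one. In a real setting `self.base` would hold pretrained weights.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=4, alpha=8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)                     # frozen "pretrained" weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # trainable, r x in
        self.B = nn.Parameter(torch.zeros(out_features, r))        # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # frozen path + scaled low-rank update
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(512, 512, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
```

With r=8 on a 512x512 layer, only 2·8·512 = 8,192 of the ~270k parameters are trained.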
- QLoRA: combines LoRA with 4-bit quantization of the full pretrained language model, compressing the weights and reducing memory requirements via the NormalFloat (NF4) encoding, which is optimized for the distribution of neural network weights.
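In code, the 4-bit NF4 loading step is typically expressed through `bitsandbytes` via `transformers` (a config sketch; the model name is only an example, and actually loading it needs a GPU and a large download):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, tuned for weight distributions
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",                     # example model
    quantization_config=bnb_config,
)
```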
- Dataset: openassistant-guanaco
- Colab Link: https://drive.google.com/file/d/1rOf5DO0L4WYm1VNzp04gB_0MjOF0pLoG/view?usp=sharing
Yet to fix a few things. Will update soon :)
Fine-tuning Falcon-7B with QLoRA plus an SFTTrainer. Ran out of disk space before completing one epoch. Attaching the loss curve below.
- SFTTrainer: supervised fine-tuning (SFT) is a crucial step in RLHF. TRL's SFTTrainer provides an easy-to-use API to create SFT models and train them with a few lines of code on your dataset.
- Dataset: openassistant-guanaco
- Colab Link: https://colab.research.google.com/drive/1P5uOvZPGqic21I9diYaB3UGwUHoK37Yh?usp=sharing
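The fine-tuning run above can be sketched with `trl` roughly as follows (a configuration sketch, not a drop-in script: argument names vary across trl versions, and running it needs a GPU; the dataset id is the common Hugging Face mirror of openassistant-guanaco):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# LoRA adapter config; the quantized base model is trained only through these adapters
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="tiiuae/falcon-7b",   # in practice, loaded in 4-bit via a quantization config
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```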