Mattral / LLM-Improving-Trained-Models-with-RLHF

This project experiments with the three essential stages of the Reinforcement Learning from Human Feedback (RLHF) process. It starts by revisiting Supervised Fine-Tuning (SFT), then trains a reward model, and concludes with the reinforcement learning phase. Along the way it applies memory-efficient methods such as 4-bit quantization and LoRA.
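
As a rough illustration of the 4-bit quantization and LoRA setup mentioned above, the sketch below loads a base model in 4-bit NF4 precision and attaches LoRA adapters before fine-tuning. It assumes a Hugging Face transformers / bitsandbytes / peft stack; the model name, target modules, and hyperparameters are placeholders, not the repo's exact configuration.

```python
# Minimal sketch (illustrative, not the repo's exact code): load a base model
# in 4-bit precision and wrap it with LoRA adapters before SFT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-1.3b"  # placeholder; substitute the checkpoint being fine-tuned

# 4-bit NF4 quantization keeps the frozen base weights small in GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA trains small low-rank adapter matrices instead of the full weight set.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The same quantized-base-plus-adapter pattern can then be reused when training the reward model and during the reinforcement learning phase, so that each stage fits in limited GPU memory.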
