lorenzofamiglini / instructGOOSE

Implementation of Reinforcement Learning from Human Feedback (RLHF)

Home Page: https://xrsrke.github.io/instructGOOSE/


InstructGoose - 🚧 WORK IN PROGRESS 🚧

Paper: InstructGPT - Training language models to follow instructions with human feedback


Questions

  • In the context of RLHF, how is $L_t^{VF}(\theta)$ calculated? (See the sketch after this list.)
    • Is it the value function the PPO agent uses to predict how much reward it will get if it generates the sequence?
  • Do the RL model and the SFT model use the same tokenizer? Yes.
  • I don’t know how to return the logits of the generation model.
  • Does the PPO agent (the language model) have a value network just like a regular PPO agent?
  • I don’t understand how to calculate the advantage in PPO. (See the sketch after this list.)
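
The two questions about $L_t^{VF}(\theta)$ and the advantage refer to standard PPO machinery rather than anything specific to this repository. Below is a minimal, generic sketch in plain PyTorch (not this library's API; gae_advantages and value_loss are illustrative names) of how Generalized Advantage Estimation and the value-function loss $L_t^{VF}(\theta) = \big(V_\theta(s_t) - V_t^{\text{targ}}\big)^2$ are usually computed:

import torch

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation for a single trajectory.

    rewards, values: 1-D tensors of per-step rewards and value estimates V(s_t).
    Returns the advantages A_t and the value targets used in the value loss.
    """
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    last_gae = 0.0
    for t in reversed(range(T)):
        next_value = values[t + 1] if t + 1 < T else 0.0     # bootstrap with 0 at the end of the episode
        delta = rewards[t] + gamma * next_value - values[t]  # TD residual delta_t
        last_gae = delta + gamma * lam * last_gae            # discounted sum of residuals
        advantages[t] = last_gae
    returns = advantages + values  # value targets V_t^targ
    return advantages, returns

def value_loss(values, returns):
    # L_t^VF(theta): squared error between predicted values and the targets above
    return torch.mean((values - returns) ** 2)

In RLHF, the per-step reward fed into this computation is typically the reward-model score with a KL penalty against the frozen reference model subtracted from it.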

Install

pip install instruct-goose

Train the RL-based language model

from transformers import AutoTokenizer, AutoModelForCausalLM
from instruct_goose import RLHFTrainer, create_reference_model, RLHFConfig

# Load the base language model (the policy to be fine-tuned) and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Frozen copy of the policy, used as the reference for the KL penalty
ref_model = create_reference_model(model)
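
The snippet imports RLHFTrainer and RLHFConfig but stops before using them. A hypothetical continuation, assuming the trainer is built from the policy model, the frozen reference model, and a config object (the actual constructor and training-loop API of instruct_goose may differ):

# NOTE: assumed wiring, not taken from the source; check the project docs for the real arguments
config = RLHFConfig()
trainer = RLHFTrainer(model, ref_model, config)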

Train the reward model
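
This section is empty in the source, so the sketch below only illustrates the pairwise-comparison objective described in the InstructGPT paper, using plain transformers and PyTorch rather than this library's reward-model API; the model setup and the pairwise_loss function are illustrative assumptions.

import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical stand-in for a reward model: a GPT-2 backbone with a scalar head
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
reward_model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=1)
reward_model.config.pad_token_id = tokenizer.pad_token_id

def pairwise_loss(prompt, chosen, rejected):
    # InstructGPT reward objective: -log sigmoid(r(x, y_w) - r(x, y_l)),
    # where y_w is the human-preferred completion and y_l the rejected one
    win = tokenizer(prompt + chosen, return_tensors="pt")
    lose = tokenizer(prompt + rejected, return_tensors="pt")
    r_win = reward_model(**win).logits.squeeze(-1)
    r_lose = reward_model(**lose).logits.squeeze(-1)
    return -F.logsigmoid(r_win - r_lose).mean()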

TODO

  • Add support for batch inference for the agent
  • Add batch support for the RLHF trainer

Resources

I used these resources to implement this library.


License: MIT License


Languages

  • Jupyter Notebook: 96.3%
  • Python: 3.7%
  • CSS: 0.1%