anthropics/hh-rlhf

Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"

Home Page: https://arxiv.org/abs/2204.05862

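The repository distributes human preference data as pairs of dialogue transcripts, one preferred ("chosen") and one dispreferred ("rejected") response per comparison. As a minimal loading sketch, assuming the dataset is also mirrored on the Hugging Face Hub under the name Anthropic/hh-rlhf with those two fields:

```python
# Minimal sketch: load the preference pairs via the Hugging Face Hub mirror.
# The dataset name "Anthropic/hh-rlhf", the "train" split, and the
# "chosen"/"rejected" field names are assumptions from the published mirror,
# not part of this repository's own text.
from datasets import load_dataset

dataset = load_dataset("Anthropic/hh-rlhf", split="train")

# Each record is one human comparison: two full Human/Assistant transcripts.
example = dataset[0]
print(example["chosen"])    # the response the annotator preferred
print(example["rejected"])  # the response the annotator rejected
```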