Repositories under the human-feedback-data topic:
EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data"
BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
The Prism Alignment Project
Easily collect yes/no feedback on language model outputs from humans