Wang-Haining / RLAM

Reinforcement Learning from Accessibility Measures


This repository accompanies the paper *Science Out of Its Ivory Tower: Improving Scholarship Accessibility with Reinforcement Learning*.

Similar to Reinforcement Learning from Human Feedback (RLHF), we use reinforcement learning to enhance a language model's capabilities beyond the standard cross-entropy objective. Unlike RLHF, however, which relies on a reward model to represent "human preference," we focus on document readability, a quality that is straightforward to quantify with established measures (e.g., the Automated Readability Index and the Flesch-Kincaid Grade Level). Instead of building a reward model, we guide optimization with heuristics such as average sentence length and word accessibility (defined as the natural logarithm of a token's occurrences per 1 billion tokens in English Wikipedia). By carefully balancing these accessibility measures, our model achieves an additional improvement of three grade levels in readability without sacrificing faithfulness or language quality. The simplified abstracts it generates are readable by adults without a college degree.
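For intuition, the sketch below shows one way such a heuristic reward could be computed. The naive sentence splitting, the unit-count smoothing for unseen tokens, and the linear combination weighted by $\beta_{WA}$ and $\beta_{SL}$ are illustrative assumptions on our part; the actual reward follows the paper and the training scripts in `runs`.

```python
import math
import re

def avg_sentence_length(text: str) -> float:
    """Average number of words per sentence (naive punctuation-based split)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return len(words) / max(len(sentences), 1)

def word_accessibility(token: str, wiki_counts: dict, wiki_total: int) -> float:
    """Natural log of the token's occurrences per 1 billion tokens of
    English Wikipedia. Unseen tokens fall back to a count of 1
    (the smoothing choice here is an assumption, not the paper's)."""
    count = wiki_counts.get(token.lower(), 1)
    return math.log(count * 1e9 / wiki_total)

def accessibility_reward(text: str, wiki_counts: dict, wiki_total: int,
                         beta_sl: float = 0.05, beta_wa: float = 4.0) -> float:
    """Toy reward: favor frequent (accessible) words, penalize long sentences."""
    words = text.split()
    mean_wa = sum(word_accessibility(w.strip(".,;:!?"), wiki_counts, wiki_total)
                  for w in words) / max(len(words), 1)
    return beta_wa * mean_wa - beta_sl * avg_sentence_length(text)
```

In the reported runs, $\beta_{WA}$ is held at 4.0 while $\beta_{SL}$ is swept between 0.05 and 0.20 (see the table under Generations).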

We hope this work helps bridge the gap between scholarly research and a broader audience, fosters the development of better simplification systems, and ultimately contributes to a more informed and engaged society.

Training Logs

Training logs for the reported runs are hosted on Weights & Biases (WandB).

Reproduction

To reproduce the results, follow these steps:

```sh
# create and activate a Python 3.10 virtual environment
python3.10 -m venv .venv
source .venv/bin/activate
# install the pinned dependencies
python -m pip install -r requirements.txt
```

SFT and Word Accessibility Model

Refer to this repository for details on the SFT and Word Accessibility Model.

RLAM/RLARI

Refer to the `runs` folder for the training and evaluation scripts used to launch runs on Slurm.

Generations

For the generations reported in the Findings section, we used outputs from the following checkpoints. Additionally, we annotated the quality of the first ten samples with respect to language quality, faithfulness, and completeness.

| Model | $\beta_{SL}$ | $\beta_{WA}$ | File |
|-------|--------------|--------------|------|
| RLARI | - | - | rl_ari_gemma-2b__42__1723771398%7Cstep_400_ari_12.18.csv |
| RLAM | 0.05 | 4.0 | rl_am_gemma-2b_sl_coef5e-2__42__1723772226%7Cstep_1550_ari_13.4.csv |
| RLAM | 0.08 | 4.0 | rl_am_gemma-2b_sl_coef8e-2__42__1723597090%7Cstep_1250_ari_13.64.csv |
| RLAM | 0.10 | 4.0 | rl_am_gemma-2b_sl_coef1e-1__42__1723692435%7Cstep_1350_ari_13.28.csv |
| RLAM | 0.20 | 4.0 | rl_am_gemma-2b_sl_coef2e-1__42__1724286966%7Cstep_1700_ari_12.17.csv |
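The file names encode the run metadata: run name, random seed, launch timestamp, training step, and the ARI of the generations (`%7C` is a URL-encoded `|`). A small, hypothetical helper written against the names above can recover these fields:

```python
import re

# Pattern inferred from the file names in the table above:
#   <run>__<seed>__<timestamp>%7Cstep_<n>_ari_<score>.csv
PATTERN = re.compile(
    r"(?P<run>.+?)__(?P<seed>\d+)__(?P<timestamp>\d+)"
    r"%7Cstep_(?P<step>\d+)_ari_(?P<ari>[\d.]+)\.csv"
)

def parse_checkpoint(name: str) -> dict:
    """Split a generation file name into its run metadata fields."""
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {name}")
    d = m.groupdict()
    d["seed"], d["step"], d["ari"] = int(d["seed"]), int(d["step"]), float(d["ari"])
    return d

print(parse_checkpoint("rl_ari_gemma-2b__42__1723771398%7Cstep_400_ari_12.18.csv"))
# {'run': 'rl_ari_gemma-2b', 'seed': 42, 'timestamp': '1723771398', 'step': 400, 'ari': 12.18}
```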

Contact

hw56@indiana.edu

License

0BSD

Reference

TODO
