bogwero / quiet-star

Code for Quiet-STaR

Home Page: https://arxiv.org/abs/2403.09629

Quiet-STaR

Code for Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking.

This project is implemented by patching the base Mistral implementation in Huggingface transformers with a new modeling_mistral.py and a new configuration_mistral.py, and otherwise applying standard transformers features (e.g. the default Trainer). Our patches were applied to Huggingface's transformers version 4.37.0.dev0 under src/transformers/models/mistral/. We cannot guarantee that other changes to their implementation will not affect ours, so for reproducibility we encourage using the same version.
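
For reference, here is a minimal sketch of one way to apply such a patch by overwriting the installed package's Mistral files. The source paths are assumptions about where the patched files sit in this repo; an editable install (pip install -e) of the pinned transformers checkout is the safer route.

import pathlib
import shutil

import transformers

# Locate the installed transformers package and overwrite its Mistral
# implementation with the patched files from this repo.
# The source filenames below are assumptions about the repo layout.
mistral_dir = pathlib.Path(transformers.__file__).parent / "models" / "mistral"
shutil.copy("modeling_mistral.py", mistral_dir / "modeling_mistral.py")
shutil.copy("configuration_mistral.py", mistral_dir / "configuration_mistral.py")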

One pitfall to be wary of: the model is not trained to avoid generating the start and end thought tokens. Thus, when performing actual inference, it is necessary to mask these tokens out.
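
As a minimal sketch, one way to do this is with a custom LogitsProcessor that sets the thought tokens' logits to -inf during generation. The token strings here are assumptions (verify them against the tokenizer's added special tokens), and model and tokenizer are assumed to be loaded as in the loading sketch below.

import torch
from transformers import LogitsProcessor, LogitsProcessorList

class SuppressThoughtTokens(LogitsProcessor):
    # Sets the logits of the given token ids to -inf so they are never sampled.
    def __init__(self, token_ids):
        self.token_ids = list(token_ids)

    def __call__(self, input_ids, scores):
        scores[:, self.token_ids] = float("-inf")
        return scores

# Assumed token names -- check the tokenizer for the actual ones.
thought_ids = tokenizer.convert_tokens_to_ids(["<|startthought|>", "<|endthought|>"])
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,
    logits_processor=LogitsProcessorList([SuppressThoughtTokens(thought_ids)]),
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))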

We make a model trained to think 8 thought tokens ahead (including the start and end tokens) available via Huggingface.
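
A loading sketch follows; the Hub model ID is an assumption (substitute the actual checkpoint name), and this assumes the patched transformers files described above are installed locally.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "ezelikman/quietstar-8-ahead" is an assumed Hub ID -- replace as needed.
model_id = "ezelikman/quietstar-8-ahead"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)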

About

License: Apache License 2.0


Languages

Python 100.0%