imoneoi / openchat

OpenChat: Advancing Open-source Language Models with Imperfect Data

Home Page: https://openchat.team

Question about `--per-sequence-loss`

Sanster opened this issue · comments

commented

In `generate_dataset.py`, there is a `--per-sequence-loss` argument, which is used in `conversation_template.py`. This parameter adjusts the loss weights based on the length of each response.

`if seq_level_weight:`

I would like to know: when training the OpenChat series models, did you enable this parameter? What impact does it have on the training results? Thanks!
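
To make the question concrete, here is a minimal illustrative sketch of the weighting behaviour described above; the function name and arguments are hypothetical and do not mirror the actual `conversation_template.py` code:

```python
# Hypothetical sketch: per-token loss weights for the assistant tokens of one
# conversation, under per-sequence vs. per-token weighting. Not repo code.

def response_loss_weights(response_lengths, per_sequence_loss):
    """Return one weight per supervised (assistant) token.

    response_lengths: number of supervised tokens in each assistant turn.
    per_sequence_loss: if True, every response contributes equally to the loss
    regardless of its length; if False, every token counts equally.
    """
    weights = []
    for length in response_lengths:
        if per_sequence_loss:
            # Each response sums to 1, so short and long replies count the same.
            weights.extend([1.0 / length] * length)
        else:
            # Each token counts once; normalization happens over all tokens later.
            weights.extend([1.0] * length)
    return weights


print(response_loss_weights([2, 4], per_sequence_loss=True))
# [0.5, 0.5, 0.25, 0.25, 0.25, 0.25]
print(response_loss_weights([2, 4], per_sequence_loss=False))
# [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```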

commented

When this parameter is enabled, losses are averaged on a per-sequence basis; otherwise they are averaged on a per-token basis (the same as the HF Trainer). It is disabled by default because it caused worse results in our experiments, in particular making the model worse at longer responses.
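
For illustration, a rough sketch (not the OpenChat trainer code) of how the two averaging modes differ once per-token cross-entropy losses are available; the tensors and response boundaries below are made up:

```python
import torch

# Two responses: lengths 2 and 4, with made-up per-token losses.
token_losses = torch.tensor([0.2, 0.4, 1.0, 1.0, 1.0, 1.0])
seq_ids = torch.tensor([0, 0, 1, 1, 1, 1])  # which response each token belongs to

# Per-token average (HF Trainer default): long responses dominate the loss.
per_token_loss = token_losses.mean()

# Per-sequence average: average within each response first, then across responses.
num_seqs = int(seq_ids.max()) + 1
seq_sums = torch.zeros(num_seqs).scatter_add_(0, seq_ids, token_losses)
seq_lens = torch.zeros(num_seqs).scatter_add_(0, seq_ids, torch.ones_like(token_losses))
per_sequence_loss = (seq_sums / seq_lens).mean()

print(per_token_loss.item())     # ~0.767
print(per_sequence_loss.item())  # 0.65
```

With per-token averaging the longer response contributes 4 of the 6 terms, whereas per-sequence averaging gives both responses equal say, which matches the description above of weighting by response length.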