nlpxucan / WizardLM

LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath

How does it support multi-turn conversations?

jujeongho0 opened this issue

Hi. Thanks for the great work.

I was wondering what is meant by "WizardLM-2 adopts the prompt format from Vicuna and supports multi-turn conversation.", so I'm opening this issue.
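
For reference, here is a minimal sketch of how I understand a multi-turn conversation would be serialized under the Vicuna prompt format. The system text, the exact spacing, and the helper name `build_vicuna_prompt` are my own assumptions based on the publicly documented Vicuna v1.1 template, not something taken from this repository:

```python
# A minimal sketch of Vicuna-style prompt serialization for a multi-turn
# conversation. System text and spacing are assumptions based on the public
# Vicuna v1.1 template, not taken from this repository.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_vicuna_prompt(turns):
    """Serialize (user, assistant) turns into one prompt string.

    The final assistant reply may be None, leaving the prompt open for the
    model to continue generating.
    """
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f" USER: {user_msg} ASSISTANT:"
        if assistant_msg is not None:
            prompt += f" {assistant_msg}</s>"
    return prompt

# Example: the second user turn is left for the model to answer after "ASSISTANT:".
print(build_vicuna_prompt([
    ("What is Evol-Instruct?", "A method that rewrites seed instructions into more complex ones."),
    ("Does it also produce multi-turn data?", None),
]))
```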

As far as I know, the datasets evolved through the Evol-Instruct process are single-turn only (perhaps formatted with the Vicuna prompt template for training), so I'm wondering how the model ends up supporting multi-turn conversation.

Or did you create a multi-turn dataset with the Evol-Instruct process and then train on it as multi-turn conversations?

Thanks.