nlpxucan / WizardLM

LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath

A question about the results reported by WizardMath

AegeanYan opened this issue · comments

Hi, I've recently been doing a survey of math models, and it seems that the results reported for WizardMath (on MATH and GSM8K) are obtained with zero-shot "Let's think step by step" prompting, whereas Llama-2 reports its results with 8-shot prompting. So the comparison doesn't seem entirely apples-to-apples, although that would actually mean your model is even stronger than reported, right?
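
To make the distinction concrete, here is a minimal sketch of the two evaluation setups being contrasted; the prompt wording and helper names are illustrative assumptions, not taken from either paper's actual evaluation code.

```python
# Illustrative sketch only: zero-shot chain-of-thought vs. few-shot (e.g. 8-shot) prompting.
# Prompt templates and function names are hypothetical, not WizardMath's or Llama-2's code.

ZERO_SHOT_COT_TRIGGER = "Let's think step by step."

def build_zero_shot_prompt(question: str) -> str:
    """Zero-shot CoT: only the test question plus a reasoning trigger."""
    return f"Question: {question}\nAnswer: {ZERO_SHOT_COT_TRIGGER}"

def build_few_shot_prompt(question: str, exemplars: list[tuple[str, str]]) -> str:
    """Few-shot: worked (question, solution) exemplars are prepended before the test question."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in exemplars]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)
```

In the zero-shot setup the model sees no solved examples at all, while in the 8-shot setup it conditions on eight solved problems before answering, which is generally considered the easier setting.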