lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

MT-bench (llm_judge) error with Llama3: <class 'openai.error.InvalidRequestError'> This model's maximum context length is 8192 tokens

JianbangZ opened this issue

I tried to run MT-bench on the Meta-Llama3-8B-Instruct model, and it keeps raising maximum-context-length errors during judging:
<class 'openai.error.InvalidRequestError'> This model's maximum context length is 8192 tokens. However, your messages resulted in 8921 tokens. Please reduce the length of the messages.

Because some judgments fail, the resulting score is low: only 7.42.
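One common workaround for this kind of error is to trim the judge request so its estimated token count fits within the model's context window before the API call is made. The sketch below is not FastChat's actual `llm_judge` code; the message format, the 4-characters-per-token estimate, and the reserved-output budget are all assumptions for illustration.

```python
# Hedged sketch: drop the oldest non-system turns until the estimated
# prompt size fits within the context window, leaving room for the reply.
# The heuristic (~4 chars per token) and message schema are assumptions.

MAX_CONTEXT = 8192          # model's context window (from the error message)
RESERVED_FOR_OUTPUT = 1024  # budget left for the judge's generated reply


def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    return max(1, len(text) // 4)


def trim_messages(messages, max_tokens=MAX_CONTEXT - RESERVED_FOR_OUTPUT):
    """Drop the oldest turns after the system prompt until the total fits."""
    trimmed = list(messages)

    def total(msgs):
        return sum(estimate_tokens(m["content"]) for m in msgs)

    while total(trimmed) > max_tokens and len(trimmed) > 1:
        # Keep the system prompt (index 0); drop the oldest turn after it.
        del trimmed[1]
    return trimmed


messages = [
    {"role": "system", "content": "You are a judge."},
    {"role": "user", "content": "x" * 40000},  # oversized first turn
    {"role": "user", "content": "short question"},
]
fitted = trim_messages(messages)
```

A more precise variant would count tokens with the model's own tokenizer instead of a character heuristic, since judging quality also suffers when answers are silently cut.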