EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval

Home Page: https://lmms-lab.github.io/lmms-eval-blog/

Typo in results sheet for llava-v1.6-vicuna-7b

FGiuliari opened this issue

Hello, nice work on the benchmark.

I noticed a typo in the results sheet.
Cell J2 says "1.6-13B (lmms-eval)", but the details in cell J3 say that it is Vicuna 7B, not 13B.

Hello! We have fixed the sheet. Thank you so much!