dvlab-research / MGM

Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"


Use of OCR in Evaluation

bruceisme opened this issue · comments

Appendix A's Image-text Data Collection states: "It is important to note that the OCR detector is utilized solely for generating enriched data and is not employed during testing." However, the TextVQA evaluation script uses llava_textvqa_val_v051_ocr.jsonl, which contains OCR tokens. Have you ever tested a version without OCR on TextVQA, and was it worse than with llava_textvqa_val_v051_ocr.jsonl? Can we conclude that the model achieves better results with OCR input?

Hi, the statement in Appendix A means that we do not run an extra PaddleOCR detector during evaluation. For TextVQA, we keep the same OCR tokens as LLaVA. Results would likely be worse without the original OCR tokens.
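
For anyone who wants to run the no-OCR ablation themselves, a minimal sketch is below. It assumes the LLaVA-style jsonl layout in which each record's "text" field holds the question followed by a line beginning with "Reference OCR token:" (the field names and that prefix are assumptions about the file format, not confirmed here):

```python
import json

def strip_ocr_tokens(record):
    """Return a copy of a TextVQA eval record with the OCR line removed.

    Assumes the record's "text" field is the question plus an optional
    line starting with "Reference OCR token:" (assumed format).
    """
    lines = [ln for ln in record["text"].split("\n")
             if not ln.startswith("Reference OCR token:")]
    return {**record, "text": "\n".join(lines)}

# Hypothetical record mimicking llava_textvqa_val_v051_ocr.jsonl
sample = {
    "question_id": 0,
    "image": "example.jpg",
    "text": "what is the brand of this camera?\nReference OCR token: fujifilm",
}

print(json.dumps(strip_ocr_tokens(sample)))
```

Applying this to every line of the eval jsonl would produce an OCR-free variant to compare against the original file.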