AILab-CVC / SEED

Official implementation of SEED-LLaMA (ICLR 2024).

Home Page: https://ailab-cvc.github.io/seed


Question on how task27 generates images

JunZhan2000 opened this issue

Hello. I saw this description in your paper:

Evaluation of text and image output. We first employ an answer ranking strategy to select the most likely text prediction. If it matches the ground truth, we evaluate the image output using the CLIP similarity score [50] between the generated image and each candidate. The model is deemed correct only if both text and image predictions match the ground truth.
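For concreteness, here is a minimal sketch (not the authors' actual evaluation code) of the CLIP-similarity step described above: the generated image and each candidate image are encoded with CLIP, and the candidate with the highest cosine similarity is taken as the model's image prediction. The checkpoint name, the Hugging Face `transformers` CLIP interface, and the assumption that the candidates are images are all my own choices for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; the paper does not specify which CLIP variant is used.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_embedding(image: Image.Image) -> torch.Tensor:
    """Encode an image with CLIP and L2-normalize the feature vector."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def pick_candidate(generated: Image.Image, candidates: list[Image.Image]) -> int:
    """Return the index of the candidate most similar to the generated image."""
    gen = image_embedding(generated)
    sims = [torch.cosine_similarity(gen, image_embedding(c)).item() for c in candidates]
    return int(max(range(len(sims)), key=sims.__getitem__))
```

Under this reading, the model is counted correct only if the ranked text answer matches the ground truth and `pick_candidate` selects the ground-truth image.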

I'm a little confused about how the image is generated: does the model first produce the corresponding text answer and then generate the image from it, or does it generate the image directly from the question?
Thanks for your work!