UMass-Foundation-Model / 3D-LLM

Code for 3D-LLM: Injecting the 3D World into Large Language Models

Get "" answer when run evaluate.py with scanqa.pth as resume_ckpt_path on scannet dataset

Xiaolong-RRL opened this issue

Dear author:

Thanks for your interesting work.

When I run the following command:

cd 3DLLM_BLIP2-base
python evaluate.py --cfg-path lavis/projects/blip2/train/finetune_scanqa.yaml

The test_best_vqa_result.json I obtained is as follows:

{"question_id": 0, "answer": ""}, 
{"question_id": 1, "answer": ""}, 
{"question_id": 2, "answer": ""}, 
{"question_id": 3, "answer": ""}, 
{"question_id": 4, "answer": "the lower kitchen cabinets the same color"}, 
{"question_id": 5, "answer": ""}, 
...
{"question_id": 11, "answer": ""}, 
{"question_id": 12, "answer": "a white t"}, 
{"question_id": 13, "answer": ""}, 
{"question_id": 14, "answer": ""}, 
{"question_id": 15, "answer": ""}, 
{"question_id": 16, "answer": "a counter"}, 
{"question_id": 17, "answer": ""}, 
{"question_id": 18, "answer": ""}
...

There are many empty answers here. I wonder if this is a normal result, and if not, how can I solve it?

Best!
Xiaolong

We haven't refactored evaluate.py yet.
For now, you can use the same script/command as for finetuning, except that you should change evaluate_only in the yaml file to True (or revise the train function in lavis/runners/runner_base.py).
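For reference, a minimal sketch of that config change, assuming the flag sits under the run: section of lavis/projects/blip2/train/finetune_scanqa.yaml (in upstream LAVIS configs the corresponding key is spelled evaluate, so check which spelling your copy of the config uses):

run:
  # ... keep the other run options (resume_ckpt_path, batch size, etc.) unchanged ...
  evaluate_only: True  # was False; skips the training loop and only runs evaluation

After that change, re-running the same finetuning command should load the checkpoint from resume_ckpt_path and go straight to evaluation.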

I will also fix it later.

I see, thanks for your kind reply~