EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval

Home Page: https://lmms-lab.github.io/lmms-eval-blog/


lmms_eval/evaluator.py", line 83, in simple_evaluate assert tasks != [], "No tasks specified, or no tasks found. Please verify the task names." AssertionError: No tasks specified, or no tasks found. Please verify the task names.

lucasjinreal opened this issue

lmms_eval/evaluator.py", line 83, in simple_evaluate
assert tasks != [], "No tasks specified, or no tasks found. Please verify the task names."
AssertionError: No tasks specified, or no tasks found. Please verify the task names.

I don't understand, why?

03-22 20:22:48 [lewisjin/mllm/llms_eval.py:240] INFO Available Tasks:
 - ai2d
 - chartqa
 - cmmmu
 - cmmmu_test
 - cmmmu_val
 - docvqa
 - docvqa_test
 - docvqa_val
 - gqa
 - hallusion_bench_image
 - iconqa
 - iconqa_test
 - iconqa_val
 - infovqa
 - infovqa_test
 - infovqa_val
 - mmbench
 - mmbench_cn
 - mmbench_cn_cc
 - mmbench_cn_dev
 - mmbench_cn_test
 - mmbench_en
 - mmbench_en_dev
 - mmbench_en_test
 - mmmu
 - mmmu_test
 - mmmu_val
 - mmvet
 - multidocvqa
 - multidocvqa_test
 - multidocvqa_val
 - ok_vqa
 - ok_vqa_val2014
 - pope

And how do I specify a custom local dataset path for mmbench_cn?

May I ask what your command is? You should use task names from the task list.

For your second question, you may refer to #15 and #23.
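To expand on the first point: the assertion at evaluator.py:83 fires because requested task names that don't match anything in the registry are dropped before evaluation starts, so a typo or an unregistered task leaves the list empty. Below is a minimal sketch of that failure mode (AVAILABLE and resolve_tasks are hypothetical stand-ins, not lmms-eval code; the set is a hand-picked subset of the "Available Tasks" list printed above):

    AVAILABLE = {"ai2d", "chartqa", "mmbench_cn_dev", "mmmu_val", "pope"}

    def resolve_tasks(requested):
        # Names that are not registered are silently filtered out.
        tasks = [name for name in requested if name in AVAILABLE]
        # Mirrors the assertion shown in the traceback above.
        assert tasks != [], "No tasks specified, or no tasks found. Please verify the task names."
        return tasks

    print(resolve_tasks(["mmbench_cn_dev"]))   # ok: ['mmbench_cn_dev']

    try:
        resolve_tasks(["mmbench-cn-dev"])      # dashes instead of underscores
    except AssertionError as e:
        print(e)                               # the error reported in this issue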

Hi, it looks like even after setting the local environment variable, the script still tries to read from the HF remote. How can I fix this?

lmms_eval/evaluator.py", line 83, in simple_evaluate assert tasks != [], "No tasks specified, or no tasks found. Please verify the task names." AssertionError: No tasks specified, or no tasks found. Please verify the task names.

Have you solved this problem? I hit the same error when using a machine that doesn't have internet access...
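For the machines without internet access mentioned above: the Hugging Face datasets / huggingface_hub libraries honor offline-mode environment variables, which have to be set before those libraries are imported. A minimal sketch, where the cache path and dataset id are placeholders rather than lmms-eval specifics:

    import os

    # Set these before `datasets` / `huggingface_hub` are imported, otherwise the
    # libraries may still try to reach the Hugging Face Hub.
    os.environ["HF_DATASETS_OFFLINE"] = "1"      # datasets: resolve everything from the local cache
    os.environ["HF_HUB_OFFLINE"] = "1"           # huggingface_hub: never open a network connection
    os.environ["HF_HOME"] = "/path/to/hf_cache"  # placeholder: wherever the cache was copied to

    from datasets import load_dataset

    # Placeholder dataset id; with the flags above this only succeeds if it is already cached.
    ds = load_dataset("some-org/some-dataset", split="validation")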

I am moving to VLMEvalKit...

For me, I have a local Hugging Face dataset builder that can be loaded with `load_dataset('/path/to/my/dataset/dataset.py')`, but it throws this error when I put `dataset_path: /path/to/my/dataset` in the YAML. Stepping through with the debugger, I found the error lies in `TaskConfig`'s `__post_init__` method in `api/task.py`.

Removing lines 99-103 solves the problem; they fail on `import_module('/path/to/my/dataset/')`:

    def __post_init__(self):
        # if self.dataset_path and os.path.exists(os.path.dirname(self.dataset_path)):
        #     import inspect
        #     from importlib import import_module
        #
        #     self.dataset_path = inspect.getfile(import_module(self.dataset_path))

Not sure what the purpose of this code is, but things run fine after I remove it.
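For anyone hitting the same thing, the mismatch is easy to reproduce outside lmms-eval: `datasets.load_dataset` accepts a filesystem path to a builder script, while `importlib.import_module` expects a dotted module name, so handing it a path fails. A small sketch using the same placeholder path as above:

    from importlib import import_module

    # import_module expects a dotted module name such as "my_package.my_module";
    # a filesystem path is not a valid module name, so this raises ModuleNotFoundError,
    # which is what the commented-out lines in __post_init__ run into for local paths.
    try:
        import_module("/path/to/my/dataset/")
    except ImportError as e:
        print("import_module rejects filesystem paths:", e)

    # datasets.load_dataset, by contrast, does accept a path to a builder script:
    #     from datasets import load_dataset
    #     ds = load_dataset("/path/to/my/dataset/dataset.py")
    # (left commented out here because the path is only a placeholder)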