EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.

Home Page: https://www.eleuther.ai


Error when limit is not specified (possibly issue with requirements?)

hammoudhasan opened this issue

Hello guys! Thanks for the great work.

I was trying to run a fresh install of the LM harness (I've used this library in the past, before the newer updates). For testing purposes I'm using:

accelerate launch -m lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks arc_challenge --output_path ./results --batch_size 8

Running this simple evaluation of pythia-160m on arc_challenge generates an error:

  File "/home/username/lm-evaluation-harness/lm_eval/evaluator.py", line 598, in <dictcomp>
    "effective": min(limit, len(task_output.task.eval_docs)),

Looking into this, it seems to stem from not passing --limit (hence limit = None), so min(limit, ...) ends up comparing None with an int, which raises a TypeError in Python 3.
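
For illustration, here's a minimal reproduction outside the harness (the variable names are just stand-ins for what evaluator.py sees):

    # Minimal reproduction, independent of the harness: in Python 3,
    # min() cannot compare None with an int.
    limit = None     # what evaluator.py gets when --limit is omitted
    num_docs = 1000  # stand-in for len(task_output.task.eval_docs)
    min(limit, num_docs)
    # TypeError: '<' not supported between instances of 'int' and 'NoneType'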

Is there something required to solve this? Could it stem from the current requirements.txt file, given that I did a fresh conda environment?
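
For what it's worth, passing an explicit --limit appears to sidestep the error, since limit is then a number rather than None, e.g.:

    accelerate launch -m lm_eval --model hf --model_args pretrained=EleutherAI/pythia-160m --tasks arc_challenge --output_path ./results --batch_size 8 --limit 500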

Edit:
A possible fix could be setting the default value to 1.0, maybe? (See the None-safe sketch below for an alternative.)
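
For illustration, a None-safe guard around the failing expression might look like this (a sketch only, not necessarily what the eventual fix does):

    # Sketch of a None-safe guard: treat a missing limit as "use all eval docs".
    n_docs = len(task_output.task.eval_docs)
    effective = min(limit, n_docs) if limit is not None else n_docs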

This is from a recent change (#1766). #1785 will fix it.