EleutherAI / lm-evaluation-harness

A framework for few-shot evaluation of language models.

Home Page: https://www.eleuther.ai

Add tasks for performance on long context lengths

nairbv opened this issue

There are a couple of papers with benchmarks for very long context lengths that don't seem to be available in lm-evaluation-harness. It would be great to have one of these, or something similar, for measuring a model's ability to extract information from long context windows, which is important for RAG.

Needle-in-a-haystack might also be a nice-to-have, though I think more difficult / "natural" long-context evals should be prioritized.
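For concreteness, here is a minimal sketch of what one synthetic needle-in-a-haystack item could look like, independent of the harness's task API. Everything here (the `make_haystack_example` helper, the filler text, the field names) is hypothetical and just for illustration, not taken from lm-evaluation-harness:

```python
import random

FILLER = (
    "The quick brown fox jumps over the lazy dog. "
    "Pack my box with five dozen liquor jugs. "
)

def make_haystack_example(context_len_words=4000, depth=0.5, seed=0):
    """Build one needle-in-a-haystack item: filler text with a single
    retrievable fact (the "needle") inserted at a given relative depth."""
    rng = random.Random(seed)
    secret = rng.randint(100000, 999999)
    needle = f"The secret access code is {secret}."

    # Repeat the filler until we reach roughly the requested word count.
    words = (FILLER * (context_len_words // len(FILLER.split()) + 1)).split()
    words = words[:context_len_words]

    # Insert the needle at the requested depth (0.0 = start, 1.0 = end).
    pos = int(len(words) * depth)
    words.insert(pos, needle)

    context = " ".join(words)
    question = "What is the secret access code?"
    return {
        "input": f"{context}\n\nQuestion: {question}\nAnswer:",
        "target": str(secret),
    }

if __name__ == "__main__":
    ex = make_haystack_example(context_len_words=2000, depth=0.25, seed=42)
    print(ex["input"][:200], "...")
    print("expected:", ex["target"])
```

Sweeping `context_len_words` and `depth` over a grid would give the usual length-vs-depth retrieval picture; wiring items like this into the harness's task/config system (and scoring with exact match on the target) is left open here, since the more "natural" benchmarks from the papers above are the primary ask.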