huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate

Missing model caching in perplexity metrics

daskol opened this issue · comments

The "official" implementation of the perplexity metric does not cache the language model [1]. It seems the metric instance should fetch the model and prepare it for further use in `_download_and_prepare`. I suppose there should be a clear API for caching and for resetting the cache. Also, it is entirely unclear how to configure metrics at load time: only `config_name` is accepted, and other kwargs are simply ignored.
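To make the request concrete, here is a minimal sketch of the caching behavior being proposed. This is not the actual `evaluate` API: `PerplexityLike`, its cache attributes, and the dict standing in for `AutoModelForCausalLM.from_pretrained` are all hypothetical, used only to illustrate loading the model once in `_download_and_prepare` and exposing an explicit reset hook.

```python
class PerplexityLike:
    """Toy metric illustrating the proposed caching behavior (hypothetical)."""

    # Cache shared across instances, keyed by model id (assumption, not
    # the real evaluate internals).
    _model_cache: dict = {}
    load_calls = 0  # counts expensive loads, for illustration only

    def _download_and_prepare(self, model_id: str):
        # Load the model only once per model_id instead of on every compute().
        if model_id not in PerplexityLike._model_cache:
            PerplexityLike.load_calls += 1
            # Stand-in for the expensive load, e.g.
            # AutoModelForCausalLM.from_pretrained(model_id)
            PerplexityLike._model_cache[model_id] = {"weights": model_id}
        return PerplexityLike._model_cache[model_id]

    @classmethod
    def reset_cache(cls):
        """The explicit cache-reset API the issue asks for."""
        cls._model_cache.clear()
```

With this shape, repeated `compute()` calls against the same model would hit the cache, while `reset_cache()` gives users control over memory.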