huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate


Missing model caching in perplexity metrics

daskol opened this issue · comments

The "official" implementation of the perplexity metric does not cache the language model [1]. It seems the metric instance should fetch the model and prepare it for further use in `_download_and_prepare`. I suppose there should be a clear API for caching and for resetting the cache. Also, it is entirely unclear how to configure a metric on loading (there is only `config_name`; other kwargs are just ignored).


+1. Do you know of a solution now?