Missing model caching in perplexity metrics
daskol opened this issue · comments
Daniel Bershatsky commented
The "official" implementation of the perplexity metric does not cache the language model [1]. It seems the metric instance should fetch the model and prepare it for later use in _download_and_prepare. I think there should be a clear API for caching and for resetting the cache. Also, it is entirely unclear how to configure a metric at load time: there is only config_name, and kwargs are simply ignored.
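A minimal sketch of what such caching could look like. All names here (`load_model`, `PerplexityMetric`, `reset_cache`) are hypothetical, not the actual datasets/evaluate API; a real implementation would call `transformers.AutoModelForCausalLM.from_pretrained` inside the loader instead of the placeholder used below.

```python
from functools import lru_cache


@lru_cache(maxsize=4)  # keep a few models in memory, keyed by model_id
def load_model(model_id: str):
    # Placeholder for an expensive load, e.g.
    # AutoModelForCausalLM.from_pretrained(model_id).
    return {"model_id": model_id}


class PerplexityMetric:
    """Hypothetical metric that prepares its model once, not per compute."""

    def __init__(self, model_id: str):
        self.model_id = model_id
        self.model = None

    def _download_and_prepare(self):
        # Fetch the model once and keep it on the instance; repeated calls
        # (or other instances with the same model_id) hit the cache.
        self.model = load_model(self.model_id)

    @staticmethod
    def reset_cache():
        # Explicit cache-reset entry point, as the issue suggests.
        load_model.cache_clear()


m1 = PerplexityMetric("gpt2")
m1._download_and_prepare()
m2 = PerplexityMetric("gpt2")
m2._download_and_prepare()
assert m1.model is m2.model  # same cached object reused across instances
```

With this shape, the expensive load happens at most once per model id per process, and callers that want a fresh model can call `reset_cache()` explicitly.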
KangkangStu commented
+1. Do you know the answer now?