huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate

Perplexity metric does not apply batching correctly to tokenization

ChengSashankh opened this issue · comments

When I try to evaluate my model's text generation with the perplexity metric, the batch_size parameter in perplexity._compute(...) is not enough on its own, because the metric tokenizes the entire set of predictions and moves it to the GPU at once. A simple change that performs tokenization per batch fixes the issue for me.
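For illustration, here is a minimal sketch of the per-batch approach (not the metric's actual implementation); the helper name batched_perplexity, the default model_id "gpt2", and the padding handling are assumptions:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def batched_perplexity(predictions, model_id="gpt2", batch_size=16, device="cuda"):
    # Sketch: tokenize and move one batch at a time, so the full prediction
    # list never has to fit on the GPU at once.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # e.g. GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
    model.eval()

    total_nll, total_tokens = 0.0, 0
    for start in range(0, len(predictions), batch_size):
        batch = predictions[start:start + batch_size]
        # Tokenize only this batch and move just these tensors to the device.
        enc = tokenizer(batch, return_tensors="pt", padding=True, truncation=True).to(device)
        labels = enc["input_ids"].clone()
        labels[enc["attention_mask"] == 0] = -100  # ignore padding in the loss
        with torch.no_grad():
            loss = model(**enc, labels=labels).loss  # mean NLL over real (shifted) tokens
        n_tokens = int(enc["attention_mask"].sum())
        total_nll += loss.item() * n_tokens  # approximate token-weighted accumulation
        total_tokens += n_tokens

    return math.exp(total_nll / total_tokens)
```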

It should also be possible to pass my own model and tokenizer to the metric (since my model cannot be published on the Hugging Face Hub). I have made these changes to enable my experiments.
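For illustration only (the local path and the idea of the metric accepting pre-loaded objects are assumptions, not the current perplexity API), a private checkpoint would be loaded from disk and handed to the metric directly rather than referenced by a Hub model_id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a private checkpoint from a local directory; the path is illustrative.
tokenizer = AutoTokenizer.from_pretrained("/path/to/private/checkpoint")
model = AutoModelForCausalLM.from_pretrained("/path/to/private/checkpoint")

# A modified metric (or the batched_perplexity sketch above, adapted to accept
# objects) would then receive `model` and `tokenizer` directly instead of a
# model_id string, so the checkpoint never needs to be pushed to the Hub.
```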

I have made changes that fix this and can open a PR to contribute them, if that sounds good to you. I believe this will benefit the developer community.

I am facing the same issue.