huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate

Shouldn't perplexity range from [1 to inf)?

ivanmkc opened this issue · comments

The range of this metric is [0, inf). A lower score is better.

perplexity = e**(sum(losses) / num_tokenized_tokens)

If sum(losses) = 0, then perplexity = 1.
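To make the argument concrete, here is a minimal sketch of that formula (a hypothetical helper, not the library's implementation), assuming `losses` holds per-token cross-entropy values:

```python
import math

def perplexity(losses, num_tokenized_tokens):
    # perplexity = e**(sum(losses) / num_tokenized_tokens)
    return math.exp(sum(losses) / num_tokenized_tokens)

# If every per-token loss is zero (the model assigns probability 1 to
# each correct token), the exponent is 0 and perplexity is exactly 1,
# which is the lower bound suggested above.
print(perplexity([0.0, 0.0, 0.0], 3))  # 1.0

# With non-negative cross-entropy losses the exponent is >= 0,
# so perplexity is always >= 1.
print(perplexity([2.3, 1.1, 0.7], 3))
```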

@ivanmkc
Perplexity ranges between zero and inf because the exponent (the sum of negative log likelihoods) can be negative.
Check out the following blog post for a better understanding: Perplexity of fixed-length models.

Thanks, will take a look.