huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate


Is it possible to evaluate a proprietary model without uploading it to HF?

sermolin opened this issue · comments

I really like HF 'evaluate'.
However, if my customer is not allowed to publicly share their model, how can they still use the HF 'evaluate' module?
In the code, I see 'model_id' always referencing a model from the public HF model hub (e.g. "lvwerra/distilbert-imdb").
Is there a way to implement a bring-your-own-model capability?

When you use evaluate, the model remains local - you don't need to upload it! Most modules just take predictions, which you can generate wherever you like!
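To make the point concrete, here is a pure-Python stand-in (not the library's code) showing what a predictions-based metric like accuracy boils down to: it only ever sees the prediction and reference lists you pass in, so the model that produced them never leaves your machine.

```python
# Hand-rolled stand-in for a predictions-based metric such as
# evaluate's "accuracy" module: it consumes only predictions and
# references, so the model itself stays local.
def accuracy(predictions, references):
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Predictions produced by your private model, however you like:
preds = [0, 1, 1, 0]
refs = [0, 1, 0, 0]
print(accuracy(preds, refs))  # 0.75
```

With the real library, `evaluate.load("accuracy").compute(predictions=preds, references=refs)` takes the same local inputs; no `model_id` is involved for metrics of this kind.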

Hello!

I am hoping to compute the perplexity of my locally trained/updated GPT2 model.
I have the model saved as a pytorch_model.bin.

How can I provide the path to my local model in perplexity.compute?
perplexity = load("perplexity", module_type="metric")
perplexity.compute(predictions=texts, model_id='gpt2')['perplexities']
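For reference, the quantity being computed here is just the exponential of the mean per-token negative log-likelihood the model assigns to each text. A stdlib sketch of that arithmetic (the token probabilities below are made up for illustration):

```python
import math

# Perplexity = exp(mean negative log-likelihood per token).
# token_probs: probabilities the model assigned to each actual token.
def perplexity_from_token_probs(token_probs):
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model that is uniformly unsure over 4 choices has perplexity 4:
print(perplexity_from_token_probs([0.25, 0.25, 0.25, 0.25]))  # ~4.0
```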

I looked into this commit.
Installing from that commit doesn't seem to work:
pip install git+https://github.com/huggingface/evaluate@95d16d913d8245780a2f7e4e1ec0ecd5b5358d00

does the following work?

perplexity.compute(predictions=texts, model_id='PATH_TO_MY_MODEL')

Maybe not - I tested your idea, but I got an error:
"huggingface_hub.utils.validators.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'E:\llama2.c\model_hf\pytorch_model.bin'."

Hello, have you solved this problem?

does the following work?

perplexity.compute(predictions=texts, model_id='PATH_TO_MY_MODEL')

This works, thanks!
I was previously trying to provide the path to a model.pt saved directly with torch.
When I provide the path to the directory containing a model saved with trl's native saving functions, e.g. trainer._save_pretrained(logging_dir) or trainer.save_model(logging_dir), it works.
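The distinction above is that model_id must point at a directory in the standard save_pretrained layout (a config.json next to the weights), not at a bare weights file like model.pt or pytorch_model.bin. A stdlib sketch of a pre-flight check (the helper name is made up for illustration):

```python
from pathlib import Path
import tempfile

# Heuristic: a directory saved with save_pretrained / save_model
# contains a config.json next to the weights, which is what lets
# model_id resolve locally instead of being parsed as a Hub repo id.
def looks_like_local_model_dir(path):
    p = Path(path)
    return p.is_dir() and (p / "config.json").is_file()

# Demo with a throwaway directory mimicking the layout:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "config.json").write_text("{}")
    (Path(d) / "pytorch_model.bin").write_bytes(b"")
    print(looks_like_local_model_dir(d))                           # True
    print(looks_like_local_model_dir(d + "/pytorch_model.bin"))    # False
```

Passing the bare .bin path is what triggered the HFValidationError earlier in this thread: the string fails local resolution and then gets rejected as an invalid Hub repo id.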

Could you post a simple demo of this? I'm new to evaluate, thank you very much!