huggingface / evaluate

🤗 Evaluate: A library for easily evaluating machine learning models and datasets.

Home Page: https://huggingface.co/docs/evaluate

combined metrics in evaluators/Suite subtasks

Vipitis opened this issue · comments

commented

I see that custom evaluators are already discussed in #367, but I don't see a proper way to have multiple metrics for a SubTask and its evaluator, apart from writing a custom metric that includes both. Adding the SubTask twice with distinct metrics would take twice as long, since all predictions have to be computed again.

Since CombinedEvaluations is not an EvaluationModule, you can't pass it to an Evaluator. So a lazy workaround I came up with is to always call combine(), since it should work with a single metric too.
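To illustrate why combining matters here, this is a plain-Python sketch of the idea (deliberately not the actual `evaluate` API; the metric functions and the `combine` helper below are made up for illustration): several metrics share one set of predictions, so the expensive inference pass only happens once instead of once per metric.

```python
# Sketch of the combine() idea in plain Python (hypothetical helpers,
# not the real `evaluate` API): several metrics are scored from a
# single set of predictions, so inference runs only once.

def accuracy(preds, refs):
    # fraction of predictions that match the references
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def error_rate(preds, refs):
    # complement of accuracy
    return 1 - accuracy(preds, refs)

def combine(metrics):
    """Return one callable that computes every metric in one pass."""
    def combined(preds, refs):
        return {name: fn(preds, refs) for name, fn in metrics.items()}
    return combined

# Predictions are computed once...
preds = [1, 0, 1, 1]
refs  = [1, 0, 0, 1]

# ...and every metric is scored from that single pass.
scores = combine({"accuracy": accuracy, "error_rate": error_rate})(preds, refs)
print(scores)  # {'accuracy': 0.75, 'error_rate': 0.25}
```

A SubTask that accepted a combined module like this would avoid rerunning predictions for each metric, which is exactly what adding the SubTask twice forces today.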

Is there a smarter way to do this, or should I try it myself with a PR?