kermitt2 / delft

a Deep Learning Framework for Text https://delft.readthedocs.io/

Average precision/recall/f1 per label

lfoppiano opened this issue · comments

I've noticed that in the evaluation using n-fold cross-validation, the report provides average precision/recall/f1 globally, but not average scores per label.

Would it be useful if I implemented it? Otherwise I will just compute the averages manually.
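Computing the per-label averages manually could look like the sketch below. It assumes each fold produces a report mapping labels to their precision/recall/f1 scores; this dict shape is a hypothetical illustration, not DeLFT's actual report structure.

```python
from collections import defaultdict

METRICS = ("precision", "recall", "f1")

def average_scores_per_label(fold_reports):
    """Average precision/recall/f1 per label across n folds.

    fold_reports: list of dicts (one per fold), each mapping a label
    to a dict with 'precision', 'recall' and 'f1' keys.
    (Hypothetical shape for illustration.)
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for report in fold_reports:
        for label, scores in report.items():
            counts[label] += 1
            for metric in METRICS:
                sums[label][metric] += scores[metric]
    # Divide by the number of folds in which each label appeared
    return {
        label: {m: sums[label][m] / counts[label] for m in METRICS}
        for label in sums
    }
```

A label missing from a fold's report is simply averaged over fewer folds, rather than being counted as zero.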

Hi Luca!
Yes, that's definitely something missing. It would be a useful addition.

Added with PR #59