Eval metrics per class
MahmoudAliEng opened this issue · comments
MahmoudAliEng commented
My dataset does not contain the usual entity classes (PER, ORG, ...); instead it contains nine other classes, NE001 to NE009.
However, I do not get a detailed metric report with per-class precision, recall and f-measure when running nerTagger.py, with either train_eval or eval.
PS: this happens both when I use --fold-count 1 and when I leave it unspecified.
How can I display the metrics like this, for example:
Evaluation on test set:
f1 (micro): 91.35
            precision    recall  f1-score   support
NE001          0.8795    0.9007    0.8899      1661
NE002          0.9647    0.9623    0.9635      1617
...               ...       ...       ...       ...
NE009          0.9260    0.9305    0.9282      1668
avg / total    0.9109    0.9161    0.9135      5648
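As a workaround, a per-class report like the one above can be computed directly from parallel lists of gold and predicted labels. A minimal self-contained sketch (this is not DeLFT's own reporting code; the function name and the token-level evaluation are assumptions for illustration):

```python
from collections import Counter

def per_class_report(gold, pred):
    """Compute per-class precision, recall, F1 and support from
    parallel lists of gold and predicted labels (token-level)."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1          # correct prediction for class g
        else:
            fp[p] += 1          # p predicted but gold was different
            fn[g] += 1          # g missed
    report = {}
    for cls in sorted(set(gold) | set(pred)):
        prec = tp[cls] / (tp[cls] + fp[cls]) if tp[cls] + fp[cls] else 0.0
        rec = tp[cls] / (tp[cls] + fn[cls]) if tp[cls] + fn[cls] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[cls] = (prec, rec, f1, gold.count(cls))
    return report
```

Note this counts exact label matches per token; entity-level (chunk) evaluation, as done by seqeval-style reports, groups B-/I- tags into spans first and will give lower numbers.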
MahmoudAliEng commented
I just ran eval with --fold-count > 1.