dbolya / tide

A General Toolbox for Identifying Object Detection Errors

Home Page: https://dbolya.github.io/tide

Per class recall

aorad opened this issue

commented

Do you plan on supporting per-class metrics?

Support already exists, but is buried a little deeper in the code.

However, recall will not be supported. Just use COCOEval for that.
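For reference, per-class recall can be read out of pycocotools directly: after `COCOeval.evaluate()` and `accumulate()`, the `eval['recall']` array is indexed by (IoU threshold, category, area range, maxDets). A minimal sketch under those assumptions; the helper name and index defaults are mine, not part of either library:

```python
import numpy as np

def per_class_recall(coco_eval, iou_idx=0, area_idx=0, maxdet_idx=-1):
    """Per-class recall from a finished pycocotools COCOeval run.

    coco_eval.eval['recall'] has shape (T, K, A, M):
    IoU thresholds x categories x area ranges x maxDets settings.
    A value of -1 means the category has no ground truth.
    Defaults pick the first IoU threshold (0.50 with stock params),
    the 'all' area range, and the largest maxDets setting.
    """
    recalls = coco_eval.eval['recall'][iou_idx, :, area_idx, maxdet_idx]
    return {cat: float(r)
            for cat, r in zip(coco_eval.params.catIds, recalls)
            if r > -1}

# Usage (assuming cocoGt/cocoDt COCO objects are already loaded):
#   coco_eval = COCOeval(cocoGt, cocoDt, 'bbox')
#   coco_eval.evaluate(); coco_eval.accumulate()
#   print(per_class_recall(coco_eval))
```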

commented

How can we access the per class metric? @dbolya

@kdk2612

    # Assumes the tidecv package; gt and results are TIDE data objects
    # (e.g. loaded via tidecv.datasets).
    from tidecv import TIDE

    tide = TIDE()
    run = tide.evaluate(gt, results)

    # run.ap_data.objs maps each class ID to its AP data object
    for class_id, ap_data in run.ap_data.objs.items():
        print('{:10s}: {:.2f}'.format(str(class_id), ap_data.get_ap()))

This will print out the AP for each class ID individually for the given run.
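If you also want a single summary number to sanity-check against, the per-class values can be averaged by hand. `mean_ap` below is a hypothetical helper of mine, not part of TIDE:

```python
def mean_ap(per_class_ap):
    """Average a {class_id: AP} dict into one mAP figure.

    Classes with no ground truth should be excluded before calling this.
    """
    aps = list(per_class_ap.values())
    return sum(aps) / len(aps) if aps else 0.0

# e.g. averaging the values printed by the loop above:
#   mean_ap({c: o.get_ap() for c, o in run.ap_data.objs.items()})
```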

commented

I am getting different results using this vs. pycocotools. Is there a reason for that?

The code I showed gives you the per-class AP @ 50; the COCO evaluation toolkit probably gives you AP @ 50:95.
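To compare like with like, the per-class AP averaged over the 0.50:0.95 thresholds can be extracted from pycocotools' `eval['precision']` array, which is indexed by (IoU threshold, recall point, category, area range, maxDets). A sketch under those assumptions; the helper name is mine:

```python
import numpy as np

def per_class_ap(coco_eval, area_idx=0, maxdet_idx=-1):
    """Per-class AP @ 0.50:0.95 from a finished COCOeval run.

    eval['precision'] has shape (T, R, K, A, M); entries of -1 mark
    (threshold, recall) cells with no valid data.
    """
    prec = coco_eval.eval['precision'][:, :, :, area_idx, maxdet_idx]  # (T, R, K)
    out = {}
    for k, cat in enumerate(coco_eval.params.catIds):
        vals = prec[:, :, k]
        vals = vals[vals > -1]
        if vals.size:
            out[cat] = float(vals.mean())
    return out
```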

commented

No, not with that code; I am using the entire project.

commented

The mAP values reported by the evaluation are different.

Can you create a new issue for this with what TIDE outputs vs. what pycocotools outputs?