chenyilun95 / tf-cpn

Cascaded Pyramid Network for Multi-Person Pose Estimation (CVPR 2018)

evaluation results

ouceduxzk opened this issue · comments

First of all, thanks for sharing the work. I quickly ran an AP test with the following results. Do you know why it is so low?

python3 models/COCO.res50.256x192.CPN/mptest.py -d 0-1 -r 350
loading annotations into memory...
Done (t=2.09s)
creating index...
index created!
loading the precalcuated json files
Loading and preparing results...
4581
4581
DONE (t=2.98s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type keypoints
there are 40504 unique images
DONE (t=14.41s).
Accumulating evaluation results...
DONE (t=0.53s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.093
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.116
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.102
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.089
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.099
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.097
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.117
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.104
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.092
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.103
AP50
ap50 is 0.141489
ap is 0.099431

I have already added the AP calculation and saved the results JSON file.
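For reference, here is a minimal sketch of such an AP calculation with pycocotools, assuming the standard COCOeval keypoint evaluation; the file names below are placeholders, not the exact paths used by tf-cpn.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = 'person_keypoints_minival2014.json'  # ground-truth annotations (assumed name)
res_file = 'cpn_keypoint_results.json'          # saved prediction JSON (assumed name)

coco_gt = COCO(ann_file)
coco_dt = coco_gt.loadRes(res_file)

# Run the standard COCO keypoint evaluation
coco_eval = COCOeval(coco_gt, coco_dt, 'keypoints')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# stats[0] is AP @ IoU=0.50:0.95, stats[1] is AP @ IoU=0.50
print('ap is %f' % coco_eval.stats[0])
print('ap50 is %f' % coco_eval.stats[1])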

Your result is similar to #11. I think something went wrong during testing. I noticed your test set contains 40504 unique images, but we evaluate on the COCO minival dataset, which contains 5000 images, and the provided detection boxes correspond to that split. You are likely using the wrong human detections for your dataset.

Thanks for your quick reply. You are right that my val JSON file is not the same; I am using person_keypoints_val2014.json. Can you provide those JSON files? They are no longer available on the official COCO dataset website.

The COCO 2014 minival JSON and its detection result JSON are provided.
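A quick sanity check (not part of the repo) to confirm the ground-truth file is really the minival split; the file name below is an assumption.

from pycocotools.coco import COCO

# Load the annotation file and count its images
coco_gt = COCO('person_keypoints_minival2014.json')
num_images = len(coco_gt.getImgIds())

# Expect 5000 for minival; 40504 means the full val2014 split was loaded
print('unique images: %d' % num_images)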

Thanks, now the results look normal:

DONE (t=0.37s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.697
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.883
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.770
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.662
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.761
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.764
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.927
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.823
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.715
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.830