mapillary / mapillary_vistas

MVD Evaluation Scripts

Which is the ultimate metric for the Semantic image segmentation track?

lopuhin opened this issue

The script reports several averages: first the average over all classes (shown on confusion_matrix.png), then an average over groups (shown on confusion_matrix_meta_2.png), and finally an average over a smaller set of coarser groups (shown on confusion_matrix_meta_1.png). Which of these three averages is the most important, i.e. which one should we optimize for?

The final metric for the semantic image segmentation track is the average IoU over all evaluated categories. As of the latest version, it is the one shown on confusion_matrix.png.
Due to a bug, the script averaged over all categories (not just the evaluated ones) until yesterday, so please make sure you are on the latest version before optimizing.
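For anyone who wants to reproduce the number offline: mean IoU can be computed directly from the confusion matrix. Below is a minimal NumPy sketch, not the actual MVD script; the `evaluated` mask (for excluding non-evaluated categories such as void) and the handling of classes that never appear are assumptions.

```python
import numpy as np

def mean_iou(conf: np.ndarray, evaluated: np.ndarray) -> float:
    """Mean IoU over evaluated classes, from a confusion matrix.

    conf[i, j] counts pixels with ground-truth class i predicted
    as class j. `evaluated` is a boolean mask over classes marking
    which ones count toward the final metric (assumed interface,
    not the MVD script's actual API).
    """
    tp = np.diag(conf).astype(np.float64)   # correctly classified pixels per class
    fp = conf.sum(axis=0) - tp              # predicted as class c, but wrong
    fn = conf.sum(axis=1) - tp              # ground truth is class c, but missed
    union = tp + fp + fn
    # Classes that never occur get NaN and are skipped by nanmean.
    iou = np.where(union > 0, tp / np.maximum(union, 1), np.nan)
    return float(np.nanmean(iou[evaluated]))

# Tiny usage example with two classes, both evaluated:
conf = np.array([[50, 2],
                 [3, 45]])
print(mean_iou(conf, np.array([True, True])))
```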

Great, thanks for the clarification!