mapillary / mapillary_vistas

MVD Evaluation Scripts

Evaluate the unlabeled pixels or not?

EthanZhangYi opened this issue

Hi, I evaluated a model trained for the semantic segmentation task using the code in this project, and I found that the unlabeled pixels are also evaluated:

    # add up confusion matrix
    # replace_indices is an (N, 2) array of (ground truth, prediction) label
    # pairs, so the histogram spans all labels on both axes
    confusion_matrix, _ = np.histogramdd(
        replace_indices,
        bins=(nr_labels, nr_labels),
        range=[(0, nr_labels), (0, nr_labels)]
    )

where nr_labels = 66.
However, according to config.json, unlabeled pixels should be ignored during evaluation.

This seems confusing.
Looking forward to a reply!
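
For context, config.json stores an evaluate flag for every label. A minimal sketch of reading those flags, assuming the standard "labels" list layout of the dataset config (the file path and variable names are illustrative):

    import json

    with open('config.json') as f:
        config = json.load(f)

    # indices of labels that should be scored
    eval_indices = [i for i, label in enumerate(config['labels'])
                    if label['evaluate']]
    # names of labels excluded from scoring, e.g. the unlabeled class
    ignored = [label['name'] for label in config['labels']
               if not label['evaluate']]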

Hi,

The confusion matrix is first computed over all labels, independently of their evaluation flag.
Before the scores are calculated, the entries for non-evaluated labels are removed from the confusion matrix.
See https://github.com/mapillary/mapillary_vistas/blob/master/mapillary_vistas/evaluation/confusion_matrix.py#L108 for details.
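
Conceptually, the reduction keeps only the rows and columns of labels whose evaluate flag is true. A rough numpy sketch of that step (not the repository's exact code; reduce_confusion_matrix and eval_indices are illustrative names):

    import numpy as np

    def reduce_confusion_matrix(confusion_matrix, eval_indices):
        # drop the rows and columns of non-evaluated labels in one step
        idx = np.asarray(eval_indices)
        return confusion_matrix[np.ix_(idx, idx)]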

I hope this clarifies the evaluation a bit.

@tobias-o Thanks for your reply.
The confusion_matrix, not the reduced_confusion_matrix, is shown in the output image (https://github.com/mapillary/mapillary_vistas/blob/master/mapillary_vistas/evaluation/evaluation.py#L236), so I misread the output image.

So is the mean IoU from the reduced_confusion_matrix used for the final evaluation of the semantic image segmentation task, or the mean IoU from the full confusion_matrix?

Thanks

This was a typo, thanks for filing the issue. It is fixed now.

You can specify --print-full-confusion-matrix to plot/print all labels (including non-evaluation ones). The default is to use only labels whose evaluate flag is set to true.
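
For completeness, mean IoU is conventionally derived from a confusion matrix with rows as ground truth and columns as predictions. A sketch of the standard IoU = TP / (TP + FP + FN) computation, independent of this repository's implementation:

    import numpy as np

    def mean_iou(confusion_matrix):
        cm = confusion_matrix.astype(np.float64)
        tp = np.diag(cm)              # correctly classified pixels per class
        fn = cm.sum(axis=1) - tp      # ground-truth pixels missed per class
        fp = cm.sum(axis=0) - tp      # pixels wrongly assigned to each class
        denom = tp + fp + fn
        iou = tp / np.maximum(denom, 1)   # guard against division by zero
        valid = denom > 0                 # classes absent from GT and prediction are skipped
        return iou[valid].mean()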