MrGiovanni / UNetPlusPlus

[IEEE TMI] Official Implementation for UNet++


The details of the paper

mitseng opened this issue


In Figure 3 of the TMI version, some IoU values are greater than Dice, e.g., the very first entry, where IoU = 0.9082 and Dice = 0.9061. Technically, IoU should NOT be greater than Dice. Is this a mistake, or does your definition of the metrics differ from the standard one in a way that isn't mentioned in the paper?
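
For reference, for a fixed pair of binary masks the two metrics are tied by the identity IoU = Dice / (2 - Dice), so IoU ≤ Dice always holds under the standard definitions. A minimal sketch (the mask arrays here are made up purely for illustration):

```python
import numpy as np

# Two made-up binary masks, just to exercise the identity.
y_true = np.array([1, 1, 1, 0, 0], dtype=bool)
y_pred = np.array([1, 1, 0, 1, 0], dtype=bool)

inter = np.logical_and(y_true, y_pred).sum()        # |A ∩ B| = 2
union = np.logical_or(y_true, y_pred).sum()         # |A ∪ B| = 4

iou = inter / union                                 # 2 / 4 = 0.50
dice = 2 * inter / (y_true.sum() + y_pred.sum())    # 4 / 6 ≈ 0.67

assert np.isclose(iou, dice / (2 - dice))           # IoU = D / (2 - D) <= D
```

By that identity, a standard Dice of 0.9061 would correspond to an IoU of 0.9061 / (2 - 0.9061) ≈ 0.828, not 0.9082.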

Hi @mitseng

Thanks for your question.

As noted in the footnote on page 1860 of our paper, the IoU score reported in the paper was calculated as mIoU, in which the prediction is binarized at a sweep of thresholds from 0.5 to 0.95 (in steps of 0.05) and the resulting IoU scores are averaged. Because this mIoU is not the standard single-threshold IoU, the usual relation IoU ≤ Dice does not have to hold for the numbers in Figure 3.

The mIoU metric is implemented here:
https://github.com/MrGiovanni/UNetPlusPlus/blob/master/keras/helper_functions.py#L25
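
For illustration, the computation amounts to the following (a minimal NumPy sketch of the idea, not the repository's exact Keras/TensorFlow code; the function and argument names here are hypothetical):

```python
import numpy as np

def miou(y_true, y_prob, thresholds=np.arange(0.5, 1.0, 0.05)):
    """Mean IoU over a sweep of binarization thresholds (DSB 2018 style).

    y_true: binary ground-truth mask; y_prob: predicted probabilities in [0, 1].
    NOTE: an illustrative sketch, not the repo's exact implementation.
    """
    y_true = y_true.astype(bool)
    scores = []
    for t in thresholds:
        y_pred = y_prob > t                             # binarize at threshold t
        inter = np.logical_and(y_true, y_pred).sum()
        union = np.logical_or(y_true, y_pred).sum()
        scores.append(inter / union if union else 1.0)  # two empty masks count as a perfect match
    return float(np.mean(scores))
```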

We used this mIoU because, at the time, we were working on the DSB 2018 (nuclei segmentation) competition, which used this metric for evaluation; we then kept the same mIoU for all other applications for consistency.

Reference: https://www.kaggle.com/c/data-science-bowl-2018/overview/evaluation

> The metric sweeps over a range of IoU thresholds, at each point calculating an average precision value. The threshold values range from 0.5 to 0.95 with a step size of 0.05: (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95). In other words, at a threshold of 0.5, a predicted object is considered a "hit" if its intersection over union with a ground truth object is greater than 0.5.
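
Concretely, that competition metric can be sketched like this (an illustrative sketch, assuming iou_matrix[i, j] holds the IoU between predicted object i and ground-truth object j; the names are hypothetical):

```python
import numpy as np

def dsb2018_score(iou_matrix, thresholds=np.arange(0.5, 1.0, 0.05)):
    """Sketch of the DSB 2018 metric: precision averaged over IoU thresholds.

    iou_matrix[i, j] = IoU between predicted object i and ground-truth object j.
    NOTE: an illustrative sketch, not Kaggle's reference implementation.
    """
    precisions = []
    for t in thresholds:
        hits = iou_matrix > t                  # a prediction "hits" a GT object at threshold t
        tp = hits.any(axis=1).sum()            # predictions matched to some GT object
        fp = iou_matrix.shape[0] - tp          # predictions with no match
        fn = (~hits.any(axis=0)).sum()         # GT objects with no match
        precisions.append(tp / (tp + fp + fn))
    return float(np.mean(precisions))
```

At thresholds of 0.5 and above, a ground-truth object can overlap at most one prediction with IoU > 0.5, so the matching is automatically one-to-one.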

Thanks,

Zongwei