High MIoU of VOC2012 on train2_iter_20000.caffemodel
LucasBoTang opened this issue · comments
When I tried to get the test scores of PASCAL VOC2012 on DeepLab v2 with ResNet-101, the scores from train2_iter_20000.caffemodel were somehow weird:
"Frequency Weighted IoU": 0.9649676213079842,
"Mean Accuracy": 0.9353094054537866,
"Mean IoU": 0.9088821693273592,
"Pixel Accuracy": 0.9819734262014723
The scores from train1_iter_20000.caffemodel are reasonable, e.g. 0.7642 MIoU before CRF. So why did I get such high scores from train2_iter_20000.caffemodel?
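For context, the four reported numbers are standard semantic-segmentation metrics derived from a per-pixel confusion matrix. A minimal sketch of how they are typically computed (this is an illustration, not the repo's actual evaluation code; classes absent from the ground truth would need masking in practice):

```python
import numpy as np

def segmentation_metrics(conf):
    """Standard metrics from a confusion matrix (rows = ground truth, cols = prediction)."""
    conf = conf.astype(np.float64)
    tp = np.diag(conf)              # correctly classified pixels per class
    gt = conf.sum(axis=1)           # ground-truth pixels per class
    pred = conf.sum(axis=0)         # predicted pixels per class
    union = gt + pred - tp
    iou = tp / union
    freq = gt / gt.sum()            # class frequency in the ground truth
    return {
        "Pixel Accuracy": tp.sum() / conf.sum(),
        "Mean Accuracy": np.mean(tp / gt),
        "Mean IoU": np.mean(iou),
        "Frequency Weighted IoU": (freq * iou).sum(),
    }

# Toy 2-class example
conf = np.array([[50, 2],
                 [3, 45]])
metrics = segmentation_metrics(conf)
print(metrics["Mean IoU"])
```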
Do you mean validation scores?
train_iter_20000.caffemodel is trained on train + aug.
train2_iter_20000.caffemodel is fine-tuned on train + val + aug.
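That split difference explains the inflated numbers: a model fine-tuned on train + val has already seen every validation image, so evaluating it on val measures memorization rather than generalization. A quick way to quantify this overlap between two image-ID lists (the helper below is a hypothetical sketch, not part of the DeepLab repo):

```python
def overlap_fraction(train_ids, eval_ids):
    """Fraction of evaluation images that also appeared in the training split."""
    eval_ids = set(eval_ids)
    if not eval_ids:
        return 0.0
    return len(set(train_ids) & eval_ids) / len(eval_ids)

# Toy example: fine-tuning on train + val makes the overlap with val complete.
train_plus_val = {"img_001", "img_002", "img_003", "img_004"}
val = {"img_003", "img_004"}
print(overlap_fraction(train_plus_val, val))  # → 1.0
```

In practice the ID sets would be read from the split list files that ship with the PASCAL VOC devkit.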
Here is the training script of the official Caffe implementation.
https://ucla.app.box.com/s/4grlj8yoodv95936uybukjh5m0tdzvrf/file/55052614302
The model names are defined in {solver|solver2}.prototxt, downloaded under the data dir.
Yes, the validation scores; sorry for the confusion. So the fine-tuned model (train2_iter_20000.caffemodel), which is trained on train + val + aug, can actually get 0.91 MIoU on the validation set?
Yes. Since it has already seen the validation images during fine-tuning, the fine-tuned model should be evaluated on the isolated test set instead.
Thanks.