About evaluation of the model
Limingxing00 opened this issue · comments
Hi,
thank you for the nice work.
I have a concern about the evaluation of the model: since there is no validation set for picking the best model, there may be a potential overfitting problem. (Or, what should a validation set for interactive segmentation look like? A unified standard would make it easier for everyone to compare their methods.)
Is this setup common in interactive object segmentation? I am new to interactive segmentation. I hope you can address my concern, thank you.
This repo is an integral (and minor) part of MiVOS, so I didn't do much evaluation. You can take a look at more mainstream interactive segmentation methods like f-BRS: https://github.com/saic-vul/fbrs_interactive_segmentation -- they mostly use synthetic user input and/or user studies for evaluation.
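To make the synthetic-user-input idea concrete, here is a minimal sketch of a clicks-to-IoU evaluation loop in the spirit of the NoC ("number of clicks") metric used by methods like f-BRS. Everything here is illustrative: `toy_model` is a hypothetical stand-in for a real interactive model, and the click simulator uses the centroid of the error region, whereas real protocols typically click the center of the largest misclassified component.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def next_click(pred, gt):
    """Synthetic user: click the (rounded) centroid of the error region.
    Returns ((y, x), is_positive) or None if the prediction is perfect."""
    errors = np.argwhere(pred != gt)
    if len(errors) == 0:
        return None
    y, x = errors.mean(axis=0).round().astype(int)
    return (y, x), bool(gt[y, x])  # positive click if the pixel is foreground

def clicks_to_iou(model_fn, gt, thresh=0.85, max_clicks=20):
    """NoC-style metric: number of clicks needed until IoU >= thresh."""
    clicks, pred = [], np.zeros_like(gt, dtype=bool)
    for n in range(1, max_clicks + 1):
        c = next_click(pred, gt)
        if c is None:
            return n - 1  # already perfect
        clicks.append(c)
        pred = model_fn(clicks, gt.shape)
        if iou(pred, gt) >= thresh:
            return n
    return max_clicks

def toy_model(clicks, shape):
    """Hypothetical segmenter: paints a 3x3 square around each positive click."""
    pred = np.zeros(shape, dtype=bool)
    for (y, x), positive in clicks:
        if positive:
            pred[max(0, y - 1):y + 2, max(0, x - 1):x + 2] = True
    return pred

gt = np.zeros((8, 8), dtype=bool)
gt[2:5, 2:5] = True  # a 3x3 ground-truth object
print(clicks_to_iou(toy_model, gt))  # → 1 (one centered click recovers the square)
```

Averaging this count over a dataset (e.g. NoC@85 or NoC@90 on GrabCut/Berkeley) is the standard way such methods are compared, which sidesteps the need for a conventional validation split.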
Thank you for your quick reply!