SamsungLabs / fbrs_interactive_segmentation

[CVPR2020] f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation https://arxiv.org/abs/2001.10331

How to generate sbd_samples_weights.pkl

gongl-cn opened this issue

I want to train on another dataset, so I need to generate the .pkl file myself. What does the third parameter represent?

I want to ask the same question.

This pickle file contains the average training loss for each sample in SBD. To obtain it, we ran a trained model with frozen weights for 10 epochs with all augmentations enabled and collected loss statistics. It is a form of hard-negative mining over the whole dataset: if you use that file, "hard" samples with higher average losses are sampled with higher probability instead of uniformly. Our later internal experiments showed that it didn't provide significant improvements on other datasets, and we trained models on LVIS+COCO without that trick. Unfortunately, we don't have a separate script for the procedure, as it was written in a "dirty" way by modifying the existing training code in a temporary branch.
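For reference, a minimal sketch of what that collection step could look like. Everything here is an assumption rather than the repo's actual code: the `(images, masks, sample_ids)` batch layout, the `criterion` returning one loss value per sample (e.g. `reduction='none'` averaged over pixels), and the pickle format (a dict mapping sample id to average loss).

```python
import pickle
from collections import defaultdict

import torch


def collect_sample_losses(model, loader, criterion, num_epochs=10,
                          out_path='sbd_samples_weights.pkl'):
    """Run a frozen model over the data and pickle per-sample average losses.

    Sketch only: assumes `loader` yields (images, masks, sample_ids) and
    `criterion` returns one loss value per sample.
    """
    model.eval()  # frozen weights: no optimizer, no backprop
    loss_sums = defaultdict(float)

    with torch.no_grad():
        # several epochs so random augmentations are averaged out
        for _ in range(num_epochs):
            for images, masks, sample_ids in loader:
                losses = criterion(model(images), masks)
                for sid, loss in zip(sample_ids.tolist(), losses.tolist()):
                    loss_sums[sid] += loss

    # each sample is seen once per epoch, so dividing by num_epochs
    # gives its average loss over all passes
    avg_losses = {sid: total / num_epochs for sid, total in loss_sums.items()}
    with open(out_path, 'wb') as f:
        pickle.dump(avg_losses, f)
    return avg_losses
```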

In general, there is no need to generate a pkl file for a new dataset; just set samples_scores_path=None, and it will hardly affect the performance on your dataset.

If I wanted to generate this .pkl file myself, would I just have to freeze some of the parameters?

Simply put, yes: you need to freeze all model parameters and collect loss statistics for several epochs.
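To close the loop, here is a hedged sketch of how such per-sample weights could drive the non-uniform sampling described above, using PyTorch's WeightedRandomSampler. The pickle layout (dataset index to average loss) and the `train_dataset` name are illustrative assumptions; the repo's own sampling code may be implemented differently.

```python
import pickle

import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

# Assumption: the pickle maps dataset index -> average loss, and
# `train_dataset` is your torch Dataset; both are illustrative names.
with open('sbd_samples_weights.pkl', 'rb') as f:
    avg_losses = pickle.load(f)

# Higher average loss -> higher sampling probability, so "hard"
# samples are drawn more often than under uniform sampling.
weights = torch.tensor([avg_losses[i] for i in range(len(train_dataset))],
                       dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(weights),
                                replacement=True)
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
```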

OK, thank you very much!