hkchengrex / MiVOS

[CVPR 2021] Modular Interactive Video Object Segmentation: Interaction-to-Mask, Propagation and Difference-Aware Fusion. Semi-supervised VOS as well!

Home Page: https://hkchengrex.com/MiVOS/

Some problems when training Fusion

nazimii opened this issue · comments

Hello, I encountered some problems when retraining the fusion model. Some key parameter settings for training fusion are not given in the repository. Can you provide them?
Specifically:
(1) generate_fusion.py: the parameter "separation" is not documented

Can you provide the relevant parameter descriptions for fusion training, and the commands to run, so that I can reproduce the results of your paper?

Also, when I try to train (python train.py), I hit a code error in fusion_dataset.py:
(1) Is there a mistake in how you assign values to self.vid_to_instance? It raises an error at:

```python
self.videos = [v for v in self.videos if v in self.vid_to_instance]
```

(line 60 in fusion_dataset.py)

We provide the pre-computed fusion data for download. The folder names reflect the parameters I used: sep30 means separation=30, and m20 means mem_freq=20. Note that I haven't generated the fusion data for the entire dataset.
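As a side note, parameters encoded in such folder names can be recovered programmatically. A small hypothetical helper (the function and the exact name format are assumptions for illustration, not code from the repo):

```python
import re

def parse_fusion_params(name):
    """Recover generation parameters from a folder name like 'sep30_m20'.

    Illustrative only: assumes the naming scheme described in this thread
    (sepNN -> separation, _mNN -> mem_freq)."""
    sep = re.search(r"sep(\d+)", name)
    mem = re.search(r"_m(\d+)", name)
    return {
        "separation": int(sep.group(1)) if sep else None,
        "mem_freq": int(mem.group(1)) if mem else None,
    }

print(parse_fusion_params("sep30_m20"))  # {'separation': 30, 'mem_freq': 20}
```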

My current L60 in fusion_dataset.py is different. I did a brief scan and couldn't find the problem.

The error when training fusion (train.py) is:

```
Traceback (most recent call last):
  File "train.py", line 103, in <module>
    start_epoch = total_iter//len(train_loader)
ZeroDivisionError: integer division or modulo by zero
```
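For context, this ZeroDivisionError just means len(train_loader) is 0, i.e., the dataset ended up empty after filtering. A minimal pure-Python sketch of that failure mode (the function name and sample video names are illustrative, not the repo's code):

```python
from collections import defaultdict

def filter_videos(videos, vid_to_instance):
    # Mirrors the dataset's filtering step:
    #   self.videos = [v for v in self.videos if v in self.vid_to_instance]
    kept = [v for v in videos if v in vid_to_instance]
    if not kept:
        # An empty dataset here later surfaces as ZeroDivisionError in
        # train.py when dividing by len(train_loader).
        raise RuntimeError(
            "No videos have fusion data -- check fd_root and the folder "
            "structure before starting training")
    return kept

vid_to_instance = defaultdict(list)
vid_to_instance["bear"].append("fusion_data/sep30_m20/bear")
print(filter_videos(["bear", "camel"], vid_to_instance))  # ['bear']
```

Adding a check like this (or simply printing len(dataset) before training) makes the real cause visible instead of the opaque division error.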

So there must be something wrong at:

```python
self.videos = [v for v in self.videos if v in self.vid_to_instance]
```

(line 52 in fusion_dataset.py), and at lines 42–48, where you assign values to self.vid_to_instance.

So your self.vid_to_instance is empty? That doesn't happen to me.

Really? I had to change the code (fusion_dataset.py, line 50) to:

```python
for folder in fuse_list:
    # folder_path = path.join(self.fd_root, folder)
    # video_list = sorted(os.listdir(folder_path))  # ['00000', '00020', '00040', '00060']
    # print("video list", video_list)
    # video level - different videos
    # for vid in video_list:
    video_path = path.join(self.fd_root, folder)  # , vid)
    self.vid_to_instance[folder].append(video_path)
    total_fuse_vid += 1
```

Only then can I train the fusion model.
Maybe the code on GitHub is different from yours?

In my code, the data in self.videos differs from self.vid_to_instance, so it raises an error in fusion_dataset.py at:

```python
self.videos = [v for v in self.videos if v in self.vid_to_instance]
```

Did you use YV (YouTubeVOS) when training the fusion model? How much data did you use to train it?

It seems like your folder structure is different, i.e., it does not have the "run" level. I used the uploaded "fusion_data" (a subset of BL30K + DAVIS) for training.
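To illustrate the "run" level, here is a hedged sketch of how a two-level fusion_data layout could be indexed (the function name, folder names, and layout are assumptions for illustration, not the actual repo or dataset code):

```python
import os
import tempfile
from collections import defaultdict
from os import path

def build_vid_to_instance(fd_root, fuse_list):
    """Walk run-level folders, then video-level subfolders.

    Assumed layout (illustrative):
        fd_root/
            sep30_m20/          <- run level (one parameter setting)
                00000/ 00020/   <- video level
    """
    vid_to_instance = defaultdict(list)
    for folder in fuse_list:                         # run level
        folder_path = path.join(fd_root, folder)
        for vid in sorted(os.listdir(folder_path)):  # video level
            vid_to_instance[vid].append(path.join(folder_path, vid))
    return vid_to_instance

# Tiny demo with a throwaway directory tree:
root = tempfile.mkdtemp()
os.makedirs(path.join(root, "sep30_m20", "00000"))
os.makedirs(path.join(root, "sep30_m20", "00020"))
mapping = build_vid_to_instance(root, ["sep30_m20"])
print(sorted(mapping))  # ['00000', '00020']
```

If the extra "run" level is flattened away (videos directly under fd_root), the inner os.listdir walks the wrong level and the resulting mapping no longer matches self.videos, which would explain the empty dataset above.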

Hi, I have a new question:

Can you share the code used to evaluate on the YTB (YouTubeVOS) dataset (multi-reference)?

Also, if I use the fusion_data you provide, will I get the same results as in the paper?