I recently started working on video inpainting. The DAVIS and YouTube-VOS test sets only provide one mask per video. How did you use these datasets for testing?
sangruolin opened this issue · comments
sangruolin commented
ruiliu-ai commented
Actually, we usually don't use object masks to evaluate the model, since there is no ground truth for object removal. Instead, we randomly generate a sequence of masks and compute the difference between the reconstructed output video and the original video.
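A minimal sketch of the evaluation protocol described above, assuming NumPy. The rectangle sizes, the fixed seed, and the use of PSNR as the difference metric are illustrative assumptions, not the repository's actual test script:

```python
import numpy as np

def random_masks(num_frames, h, w, seed=2021, max_rects=3):
    # Fixed seed so the mask sequence is reproducible across runs.
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_frames, h, w), dtype=bool)
    for t in range(num_frames):
        # Draw a few random rectangular holes per frame (sizes are assumed).
        for _ in range(rng.integers(1, max_rects + 1)):
            rh = rng.integers(h // 8, h // 3)
            rw = rng.integers(w // 8, w // 3)
            y = rng.integers(0, h - rh)
            x = rng.integers(0, w - rw)
            masks[t, y:y + rh, x:x + rw] = True
    return masks

def psnr(original, reconstructed, max_val=255.0):
    # Mean PSNR over the whole video between the original frames
    # and the inpainted output; higher is better.
    mse = np.mean((original.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Typical use: mask the original video with `random_masks`, run the inpainting model on the corrupted frames, then call `psnr(original, output)` (SSIM is often reported alongside).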
Yuxiang Chen commented
> Actually, we usually don't use object masks to evaluate the model, since there is no ground truth for object removal. Instead, we randomly generate a sequence of masks and compute the difference between the reconstructed output video and the original video.
How do you select the random seed for generating the test masks? How do you make sure that every paper uses the same setting? I can't find the test masks in the dataset. Looking forward to your reply, thanks.
Yuxiang Chen commented
Any answer?