How can I annotate the Foreground indicator, Boundary offsets, and Saliency score on my own moment retrieval dataset?
tiesanguaixia opened this issue
tiesanguaixia commented
Thank you for the wonderful work! I want to finetune UniVTG on my own moment retrieval dataset. I wonder how the authors re-annotated the original json files of moment retrieval datasets like NLQ, Charades-STA, and TACoS to divide a video into clips and give each clip a Foreground indicator, Boundary offsets, and Saliency score.
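My current guess is something like the sketch below: split the video into fixed-length clips, mark a clip as foreground if it falls inside the annotated moment, and store the distances from the clip to the moment boundaries. All names and the center-based foreground rule here are my own assumptions, not necessarily the exact UniVTG convention:

```python
def label_clips(video_duration, clip_len, moment_start, moment_end, saliency=1.0):
    """Hypothetical per-clip labeling for one (start, end) moment annotation.

    Returns a list of (foreground, (offset_start, offset_end), saliency)
    tuples, one per clip. My assumptions: a clip is foreground if its
    center lies inside the moment, and offsets are the distances from the
    clip center to the moment's start and end (only set for foreground clips).
    """
    labels = []
    t = 0.0
    while t < video_duration:
        clip_end = min(t + clip_len, video_duration)
        center = (t + clip_end) / 2
        fg = 1 if moment_start <= center <= moment_end else 0
        offsets = (center - moment_start, moment_end - center) if fg else (0.0, 0.0)
        labels.append((fg, offsets, saliency if fg else 0.0))
        t += clip_len
    return labels

# Example: a 10 s video with 2 s clips and a moment from 3 s to 7 s.
print(label_clips(10, 2, 3, 7))
```

Is this roughly how the re-annotation works, or do the saliency scores come from separate human annotations (as in QVHighlights) rather than the foreground mask?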