showlab / UniVTG

[ICCV2023] UniVTG: Towards Unified Video-Language Temporal Grounding

Home Page: https://arxiv.org/abs/2307.16715


How can I annotate the Foreground indicator, Boundary offsets, and Saliency score on my own moment retrieval dataset?

tiesanguaixia opened this issue · comments

Thank you for the wonderful work! I want to fine-tune UniVTG on my own moment retrieval dataset. I wonder how the authors re-annotate the original JSON files of moment retrieval datasets like NLQ, Charades-STA, and TACoS, i.e., how a video is divided into clips and how each clip is assigned its $f_{i}$?
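
For context on the question, here is a minimal sketch (not the authors' preprocessing script) of how clip-level labels could be derived from a single moment annotation, assuming fixed-length clips and the convention that a clip counts as foreground when its center lies inside the annotated moment; the `clip_labels` helper and the saliency fallback are assumptions for illustration only:

```python
import math

def clip_labels(duration, clip_len, moment_start, moment_end):
    """Derive per-clip (foreground f_i, offsets d_i, saliency s_i) for one query/moment pair.

    Hypothetical sketch: assumes fixed-length clips and foreground defined by
    whether the clip center falls inside the annotated moment.
    """
    num_clips = math.ceil(duration / clip_len)
    labels = []
    for i in range(num_clips):
        center = (i + 0.5) * clip_len                    # timestamp of the clip center
        fg = moment_start <= center <= moment_end        # foreground indicator f_i
        # Boundary offsets d_i = (distance to moment start, distance to moment end),
        # only meaningful for foreground clips.
        offsets = (center - moment_start, moment_end - center) if fg else (0.0, 0.0)
        # Moment retrieval datasets usually have no saliency labels; reusing the
        # foreground indicator is one possible fallback (an assumption here).
        saliency = 1.0 if fg else 0.0
        labels.append({"f": int(fg), "d": offsets, "s": saliency})
    return labels

# Example: a 10 s video, 2 s clips, moment annotated at 3.0-7.5 s
print(clip_labels(10.0, 2.0, 3.0, 7.5))
```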