ajabri / videowalk

Repository for "Space-Time Correspondence as a Contrastive Random Walk" (NeurIPS 2020)

Home Page: http://ajabri.github.io/videowalk

Is it possible to use the ILSVRC-VID dataset to train?

SimJJ96 opened this issue · comments

Hi, thanks for sharing your work! I was wondering if it is possible to train the model on the ILSVRC-VID or YTB-VOS datasets instead.

I have tried creating dataset classes for ILSVRC-VID and YTB-VOS that return a Tensor[F, H, W, C], where F is the number of frames, before any transformation. However, after passing through the train transform, the output is a tuple instead.

This tuple in turn causes an error in train.py under train_one_epoch, at `video = video.to(device)`: `'list' object has no attribute 'to'`. How can I rectify this issue? Thanks.
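One possible workaround, assuming the train transform returns a tuple or list of per-frame tensors rather than a single stacked tensor: stack the frames back into one tensor in your dataset's `__getitem__` before returning, so that `video.to(device)` works unchanged in train_one_epoch. This is only a sketch; `to_tensor_clip` is a hypothetical helper, not part of the videowalk codebase.

```python
import torch

def to_tensor_clip(video):
    # Hypothetical helper: if the transform pipeline produced a
    # tuple/list of frame tensors, stack them along a new frame
    # dimension; if it is already a tensor, pass it through.
    if isinstance(video, (tuple, list)):
        video = torch.stack([torch.as_tensor(f) for f in video], dim=0)
    return video

# Example usage inside a custom Dataset.__getitem__:
#   frames = self.transform(frames)   # may return a tuple of frames
#   return to_tensor_clip(frames)     # always a single Tensor
```

An alternative is to leave the dataset as-is and call `torch.stack(video)` in the training loop, but converting in the dataset keeps train.py untouched.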

@SimJJ96 Hi, have you successfully trained with the YTB-VOS dataset? I tried it but got very low VOS performance on DAVIS17.