yalesong / pvse

Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval (CVPR 2019)

IndexError when training on MRW

zyyll opened this issue · comments

commented

Thank you for your awesome work! When I train on the MRW dataset, I encounter the following problem:
```
  File "train.py", line 236, in <module>
    main()
  File "train.py", line 216, in main
    loss = train(epoch, trn_loader, model, criterion, optimizer, args)
  File "train.py", line 68, in train
    for itr, data in enumerate(data_loader):
  File "/root/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/root/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/root/CV/pvse/data.py", line 230, in __getitem__
    video = self.transform(frames)
  File "/root/CV/pvse/video_transforms.py", line 58, in __call__
    img = t(img)
  File "/root/CV/pvse/video_transforms.py", line 588, in __call__
    i, j, h, w = self.get_params(img_list[0], self.scale, self.ratio)
IndexError: list index out of range
```
How can I fix it? Thank you for your attention.

It seems `frames` in `video = self.transform(frames)` is empty, which suggests that the gulp data is corrupted. Did you download the videos and gulp them yourself, or did you download the preprocessed gulp files that we provide?
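Not part of the repo, but one way to work around occasional empty clips while the data issue is investigated is to make the dataset's `__getitem__` resample a neighboring index instead of passing an empty frame list to the transform. This is a minimal sketch with hypothetical names (`SafeVideoDataset`, `frames_by_index`); the real `data.py` reads frames from gulp files instead.

```python
class SafeVideoDataset:
    """Hypothetical sketch: skip clips that decode to zero frames instead of
    letting the crop transform crash on an empty list."""

    def __init__(self, frames_by_index, transform=None, max_retries=10):
        # frames_by_index stands in for the gulp reader: index -> list of frames
        self.frames_by_index = frames_by_index
        self.transform = transform
        self.max_retries = max_retries

    def __len__(self):
        return len(self.frames_by_index)

    def __getitem__(self, index):
        for _ in range(self.max_retries):
            frames = self.frames_by_index[index]
            if frames:  # non-empty: safe to hand to the transform pipeline
                return self.transform(frames) if self.transform else frames
            # empty clip (e.g. a corrupted gulp entry): try the next sample
            index = (index + 1) % len(self)
        raise RuntimeError('too many consecutive empty clips; check the gulp files')
```

Silently substituting samples slightly perturbs the training distribution, so it is better treated as a stopgap than a fix; the underlying corrupted entries should still be found and re-downloaded.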

commented

Yes, I downloaded the preprocessed gulp files that you provide. The problem didn't occur at first, but only after several batches. The error output is below:

```
Namespace(batch_size=128, batch_size_eval=16, ckpt='', cnn_type='resnet152', crop_size=224, data_name='mrw', data_path='/root/CV/pvse/data/', debug=False, div_weight=0.0, dropout=0.0, embed_size=1024, eval_on_gpu=False, grad_clip=2.0, img_attention=True, img_finetune=False, log_file='/root/CV/pvse/logs/logX.log', log_step=10, logger_name='/root/CV/pvse/runs/runX', lr=0.0002, margin=0.1, max_video_length=4, max_violation=False, mmd_weight=0.0, num_embeds=1, num_epochs=30, order=False, txt_attention=True, txt_finetune=False, val_metric='rsum', vocab_path='/root/CV/pvse/vocab/', weight_decay=0.0, wemb_type=None, word_dim=300, workers=0)
2019-12-06 01:05:21 INFO [0][ 10/345] loss: 0.1981 (0.1983), ranking: 0.1981, (0.1983)
2019-12-06 01:07:06 INFO [0][ 20/345] loss: 0.1971 (0.1982), ranking: 0.1971, (0.1982)
2019-12-06 01:08:49 INFO [0][ 30/345] loss: 0.1979 (0.1981), ranking: 0.1979, (0.1981)
2019-12-06 01:10:34 INFO [0][ 40/345] loss: 0.1974 (0.1978), ranking: 0.1974, (0.1978)
Traceback (most recent call last):
  File "train.py", line 236, in <module>
    main()
  File "train.py", line 216, in main
    loss = train(epoch, trn_loader, model, criterion, optimizer, args)
  File "train.py", line 68, in train
    for itr, data in enumerate(data_loader):
  File "/root/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/root/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 560, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/root/CV/pvse/data.py", line 230, in __getitem__
    video = self.transform(frames)
  File "/root/CV/pvse/video_transforms.py", line 58, in __call__
    img = t(img)
  File "/root/CV/pvse/video_transforms.py", line 588, in __call__
    i, j, h, w = self.get_params(img_list[0], self.scale, self.ratio)
IndexError: list index out of range
```
Thank you for your attention!
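Since the crash only appears after several batches, the offending samples could be located ahead of training by scanning the dataset once and recording which indices fail. This is a generic sketch, not repo code: `find_empty_clips` is a hypothetical helper that assumes any object supporting `len()` and integer indexing, such as the dataset constructed in `data.py`.

```python
def find_empty_clips(dataset):
    """Scan an indexable dataset and return the indices whose sample fails
    to load with IndexError (the error an empty frame list triggers inside
    the crop transform)."""
    bad = []
    for i in range(len(dataset)):
        try:
            dataset[i]  # loading alone is enough to trigger the transform
        except IndexError:
            bad.append(i)
    return bad
```

Running this over the MRW training set would pinpoint the corrupted gulp entries, which could then be re-downloaded or excluded.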

Sorry for the delay on this. Are you still having the issue?

Closing due to inactivity.