NVIDIA / vid2vid

PyTorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation.

Input and target shapes not matching

tharindu-mathew opened this issue

I'm trying to train on a custom dataset and I'm running into this issue. I'm simply trying to map an RGB image sequence to an RGB image sequence.

Command:
```
python train.py --name p_256_g1 \
    --dataroot datasets/custom/ --dataset_mode temporal \
    --input_nc 3 --loadSize 256 \
    --max_frames_per_gpu 2 --n_frames_total 6 --gpu_ids 1,2,3,4 \
    --n_downsample_G 2 --num_D 1 \
    --no_first_img
```
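
In case it helps, here is a minimal standalone sketch (independent of vid2vid, assuming at least two visible GPUs; the module and tensor names are made up) of how `nn.DataParallel` chunks each tensor argument along dim 0 independently. My guess is that this is how the mismatch in the error below arises: `real_B` and `fake_B` reach `modelD` with different batch sizes, so each replica receives chunks of different sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class L1Wrapper(nn.Module):
    """Tiny stand-in for the criterionVGG call inside modelD."""
    def forward(self, x, y):
        # Each replica only sees its own chunk of x and y.
        print('replica sees', tuple(x.shape), tuple(y.shape))
        # Unsqueeze so the scalar loss is 1-dim and DataParallel can gather it.
        return F.l1_loss(x, y).unsqueeze(0)

# DataParallel scatters every tensor argument along dim 0 independently,
# so two tensors with different batch sizes get mismatched chunks per GPU.
model = nn.DataParallel(L1Wrapper(), device_ids=[0, 1]).cuda()
x = torch.randn(2, 64, 128, 256).cuda()  # stand-in for fake_B VGG features
y = torch.randn(4, 64, 128, 256).cuda()  # stand-in for real_B VGG features
model(x, y)  # each replica gets x with batch 1 but y with batch 2
```

Under PyTorch 0.4 (the version in the traceback below), the mismatched L1 shapes raise the same "input and target shapes do not match" RuntimeError; newer versions broadcast with a warning instead, but the underlying batch mismatch is the same.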

Error:
```
Traceback (most recent call last):
  File "train.py", line 329, in <module>
    train()
  File "train.py", line 117, in train
    losses = modelD(0, reshape([real_B, fake_B, fake_B_raw, real_A, real_B_prev, fake_B_prev, flow, weight, flow_ref, conf_ref]))
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 114, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
    raise output
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 41, in _worker
    output = module(*input, **kwargs)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/scratch2/mathewc/vid2vid/models/vid2vid_model_D.py", line 184, in forward
    loss_G_VGG = (self.criterionVGG(fake_B, real_B) * lambda_feat) if not self.opt.no_vgg else torch.zeros_like(loss_W)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/scratch2/mathewc/vid2vid/models/networks.py", line 756, in forward
    loss += self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 85, in forward
    reduce=self.reduce)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/functional.py", line 1558, in l1_loss
    input, target, size_average, reduce)
  File "/scratch2/mathewc/anaconda3/envs/vid2vid/lib/python3.6/site-packages/torch/nn/functional.py", line 1537, in _pointwise_loss
    return lambd_optimized(input, target, size_average, reduce)
RuntimeError: input and target shapes do not match: input [1 x 64 x 128 x 256], target [2 x 64 x 128 x 256] at /opt/conda/conda-bld/pytorch_1524590031827/work/aten/src/THCUNN/generic/AbsCriterion.cu:15
```
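
As a first check, printing the batch sizes of everything passed to `modelD` right before the DataParallel call should show whether `real_B` and `fake_B` already disagree before scattering. This is my own debugging snippet built from the names in the traceback, not code from the repo:

```python
# Hypothetical debugging check in train.py, just before the modelD call at
# line 117 (my own snippet, not from the repo).
tensors = reshape([real_B, fake_B, fake_B_raw, real_A, real_B_prev,
                   fake_B_prev, flow, weight, flow_ref, conf_ref])
# Some entries (e.g. flow) may be None depending on options, so guard for that.
print('pre-scatter batch sizes:',
      [None if t is None else t.size(0) for t in tensors])
losses = modelD(0, tensors)
```

If `fake_B`'s batch here is already half of `real_B`'s (say 4 vs 8), chunking across the four GPUs would hand each replica exactly the [1 x 64 x 128 x 256] vs [2 x 64 x 128 x 256] pair shown in the error above.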