sgugger / Deep-Learning

A few notebooks about deep learning in pytorch

DeepPainterlyHarmonization notebook error: IndexError: too many indices for tensor of dimension 1

lishali opened this issue

Hi Sylvain, thanks for the awesome exposition in your DeepPainterlyHarmonization notebook. I am, however, running into an issue during the second training phase. Without changing anything, I was able to run the code up until In[84], where it fails with the following error:

IndexError: too many indices for tensor of dimension 1

Here's the full traceback:


IndexError Traceback (most recent call last)
in ()
1 n_iter=0
----> 2 while n_iter <= max_iter: optimizer.step(partial(step,final_loss))

~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/optim/lbfgs.py in step(self, closure)
101
102 # evaluate initial f(x) and df/dx
--> 103 orig_loss = closure()
104 loss = float(orig_loss)
105 current_evals = 1

in step(loss_fn)
2 global n_iter
3 optimizer.zero_grad()
----> 4 loss = loss_fn(opt_img_v)
5 loss.backward()
6 n_iter += 1

in final_loss(opt_img_v)
4 c_loss = content_loss(out_ftrs[-1])
5 s_loss = style_loss(out_ftrs)
----> 6 h_loss = hist_loss([out_ftrs[0], out_ftrs[3]])
7 t_loss = tv_loss(opt_img_v[0])
8 return c_loss + w_s * s_loss + w_h * h_loss + w_tv * t_loss

in hist_loss(out_ftrs)
6 mask = V(torch.Tensor(mf).contiguous().view(1, -1), requires_grad=False)
7 of_masked = of * mask
----> 8 of_masked = torch.cat([of_masked[i][mask>=0.1].unsqueeze(0) for i in range(of_masked.size(0))])
9 loss += F.mse_loss(of_masked, V(remap_hist(of_masked, sh), requires_grad=False))
10 return loss / 2

in (.0)
6 mask = V(torch.Tensor(mf).contiguous().view(1, -1), requires_grad=False)
7 of_masked = of * mask
----> 8 of_masked = torch.cat([of_masked[i][mask>=0.1].unsqueeze(0) for i in range(of_masked.size(0))])
9 loss += F.mse_loss(of_masked, V(remap_hist(of_masked, sh), requires_grad=False))
10 return loss / 2

IndexError: too many indices for tensor of dimension 1

The expression "mask>=0.1" returns a tensor of size (1, N), so indexing the 1-D row of_masked[i] with it raises the error. Squeeze out the 0th dimension to make it work, replacing that line with:
of_masked = torch.cat([of_masked[i][(mask>=0.1).squeeze(0)].unsqueeze(0) for i in range(of_masked.size(0))])
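
Here's a minimal sketch of the shape problem (toy shapes for illustration, not the notebook's actual feature maps):

import torch

of_masked = torch.rand(3, 5)                    # (channels, N) masked features
mask = torch.rand(1, 5)                         # (1, N), as produced by .view(1, -1)

# of_masked[0][mask >= 0.1]                     # IndexError: (1, N) boolean index on a 1-D row
row = of_masked[0][(mask >= 0.1).squeeze(0)]    # works: (N,) boolean index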

There was another issue I came across in the function "remap_hist", on the following line:

idx = (cum_ref.unsqueeze(1) - rng.unsqueeze(2) < 0).sum(2).long()

It throws an error due to a data type mismatch. Typecast the tensor 'rng' into a float tensor by adding the following before that line:

rng = rng.type(torch.cuda.FloatTensor)
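
For anyone who wants to see the mismatch in isolation, here's a minimal sketch with toy shapes, on CPU (older PyTorch versions did not automatically promote the integer rng to float in this subtraction):

import torch

cum_ref = torch.rand(3, 10)                     # float cumulative histogram
rng = torch.arange(1, 11).unsqueeze(0)          # integer tensor by default

rng = rng.float()                               # the cast; use torch.cuda.FloatTensor on GPU
idx = (cum_ref.unsqueeze(1) - rng.unsqueeze(2) < 0).sum(2).long()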

@mailcorahul I am getting the following error, can you please help me with it?

/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))

RuntimeError Traceback (most recent call last)
in ()
1 n_iter=0
----> 2 while n_iter <= max_iter: optimizer.step(partial(step,final_loss))

4 frames
in remap_hist(x, hist_ref)
13 ratio = ratio.squeeze().clamp(0,1)
14 new_x = ymin + (ratio + idx.float()) * step
---> 15 new_x[:,-1] = ymax
16 _, remap = sort_idx.sort()
17 new_x = select_idx(new_x,idx)

RuntimeError: expand(torch.cuda.FloatTensor{[64, 1]}, size=[64]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

This looks like the tensor on which expand is called has a shape of (64, 1), but PyTorch is trying to expand it to size (64), hence the error. From the code you posted I cannot tell which tensor it is, but you can try calling .squeeze() on that tensor (before the expand). It should solve it.
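
A minimal illustration of that expand error and the .squeeze() fix, using toy shapes matching the traceback:

import torch

new_x = torch.zeros(64, 10)
ymax = torch.rand(64, 1)          # kept 2-D by an earlier .unsqueeze(1)

# new_x[:, -1] = ymax             # RuntimeError: cannot expand (64, 1) into (64,)
new_x[:, -1] = ymax.squeeze()     # drop the trailing dim so the shapes match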

@mailcorahul I'm getting an error when I try to call the function below:

def remap_hist(x, hist_ref):
    ch, n = x.size()
    sorted_x, sort_idx = x.data.sort(1)
    ymin, ymax = x.data.min(1)[0].unsqueeze(1), x.data.max(1)[0].unsqueeze(1)
    hist = hist_ref * n / hist_ref.sum(1).unsqueeze(1)  # normalization between the different lengths of masks
    cum_ref = hist.cumsum(1)
    cum_prev = torch.cat([torch.zeros(ch, 1).cuda(), cum_ref[:, :-1]], 1)
    step = (ymax - ymin) / n_bins
    rng = torch.arange(1, n + 1).unsqueeze(0).cuda()
    rng = rng.type(torch.cuda.FloatTensor)
    idx = (cum_ref.unsqueeze(1) - rng.unsqueeze(2) < 0).sum(2).long()
    ratio = (rng - select_idx(cum_prev, idx)) / (1e-8 + select_idx(hist, idx))
    ratio = ratio.squeeze().clamp(0, 1)
    new_x = ymin + (ratio + idx.float()) * step
    new_x[:, -1] = ymax
    _, remap = sort_idx.sort()
    new_x = select_idx(new_x, idx)
    return new_x

@mailcorahul Hey, even I got the same error that @changchethu faced. Is there any possibility you could provide a solution soon? Please.

@disharameshh this error is related to tensor shapes; please see if my comment above helps.

@mailcorahul I went through your comment above, but I still couldn't figure out the piece of code causing the error. Can you be more specific, please? Which part of the code needs to be changed?

Can you post the complete exception traceback? I need to know which statement is throwing the error.

@mailcorahul
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
warnings.warn(warning.format(ret))
RuntimeError Traceback (most recent call last)
in ()
1 n_iter=0
----> 2 while n_iter <= max_iter: optimizer.step(partial(step,final_loss))

4 frames
in remap_hist(x, hist_ref)
13 ratio = ratio.squeeze().clamp(0,1)
14 new_x = ymin + (ratio + idx.float()) * step
---> 15 new_x[:,-1] = ymax
16 _, remap = sort_idx.sort()
17 new_x = select_idx(new_x,idx)

RuntimeError: expand(torch.cuda.FloatTensor{[64, 1]}, size=[64]): the number of sizes provided (1) must be greater or equal to the number of dimensions in the tensor (2)

It's pointing to this function:

def remap_hist(x, hist_ref):
    ch, n = x.size()
    sorted_x, sort_idx = x.data.sort(1)
    ymin, ymax = x.data.min(1)[0].unsqueeze(1), x.data.max(1)[0].unsqueeze(1)
    hist = hist_ref * n / hist_ref.sum(1).unsqueeze(1)  # normalization between the different lengths of masks
    cum_ref = hist.cumsum(1)
    cum_prev = torch.cat([torch.zeros(ch, 1).cuda(), cum_ref[:, :-1]], 1)
    step = (ymax - ymin) / n_bins
    rng = torch.arange(1, n + 1).unsqueeze(0).cuda()
    rng = rng.type(torch.cuda.FloatTensor)
    idx = (cum_ref.unsqueeze(1) - rng.unsqueeze(2) < 0).sum(2).long()
    ratio = (rng - select_idx(cum_prev, idx)) / (1e-8 + select_idx(hist, idx))
    ratio = ratio.squeeze().clamp(0, 1)
    new_x = ymin + (ratio + idx.float()) * step
    new_x[:, -1] = ymax
    _, remap = sort_idx.sort()
    new_x = select_idx(new_x, idx)
    return new_x

Can anyone please help me with this issue?

Can you print the shapes of new_x and ymax before the line "new_x[:,-1] = ymax"?
Post the shapes of both tensors here.
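
For example, a quick diagnostic inserted just before the failing line inside remap_hist (the exact sizes will depend on your masks):

print(new_x.shape, ymax.shape)    # e.g. torch.Size([64, n]) vs torch.Size([64, 1])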

@disharameshh in the remap_hist function, replace new_x[:,-1] = ymax with new_x[:,-1] = ymax.squeeze()