ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 32, 1, 1])
571502680 opened this issue · comments
How can this be solved?
Hi, What's the size of the input image?
480×640×3. Does segmentation depend on the size of the input image?
The segmentation task doesn't depend on the size of the input image, but the spatial resolution of the feature map will be 1/8 of it. Can you write the size as height × width?
Please give a detailed error log.
Please ensure batch_size > 1
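The reason batch_size must be > 1: in train() mode, BatchNorm2d computes mean and variance per channel over the (N, H, W) dimensions. After the pyramid pooling step the feature map is pooled down to 1×1, so with N = 1 there is exactly one value per channel and the variance is undefined. A minimal reproduction of the error from the title:

```python
import torch
import torch.nn as nn

# BatchNorm2d normalizes each channel over (N, H, W). A 1x32x1x1 input
# gives exactly one value per channel, so batch statistics cannot be
# computed and PyTorch raises the ValueError seen in this issue.
bn = nn.BatchNorm2d(32)
bn.train()

try:
    bn(torch.randn(1, 32, 1, 1))  # batch_size = 1 after global pooling
except ValueError as e:
    print("train mode:", e)

# With batch_size >= 2 there is more than one value per channel.
out = bn(torch.randn(2, 32, 1, 1))  # works
```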
When I use the Cityscapes dataset this error occurs, and batch_size = 32.
Hi, is there a more detailed error log?
Nothing. I deleted BatchNorm and it works.
BatchNorm is important; removing it may hurt performance.
-_-! Does BatchNorm depend on the batch size? It doesn't work unless I delete BN.
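One alternative to deleting the normalization entirely (not suggested by the maintainer in this thread, just a common substitution): GroupNorm normalizes over channel groups within each sample, so it does not depend on the batch size at all.

```python
import torch
import torch.nn as nn

# GroupNorm computes statistics per sample over groups of channels
# (here 8 groups of 4 channels each), so batch_size = 1 is fine even
# in train() mode -- unlike BatchNorm2d.
gn = nn.GroupNorm(num_groups=8, num_channels=32)
gn.train()

x = torch.randn(1, 32, 1, 1)
out = gn(x)  # no ValueError, unlike BatchNorm2d on the same input
```

The trade-off is that GroupNorm has no running statistics and can behave differently from BatchNorm at larger batch sizes, so performance should be re-validated.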
When classifying, is the background a class?
Namespace(aux=False, aux_weight=0.4, base_size=520, batch_size=1, crop_size=224, dataset='citys', device=device(type='cpu'), epochs=160, eval=False, lr=0.01, model='fast_scnn', momentum=0.9, no_val=True, resume=None, save_folder='./weights', start_epoch=0, train_split='train', weight_decay=0.0001)
Found 2975 images in the folder ./datasets/citys/leftImg8bit/train
Found 500 images in the folder ./datasets/citys/leftImg8bit/val
w/ class balance
Starting Epoch: 0, Total Epochs: 160
Traceback (most recent call last):
File "/home/Fast-SCNN-pytorch/train.py", line 200, in <module>
trainer.train()
File "/home/Fast-SCNN-pytorch/train.py", line 135, in train
outputs = self.model(images)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/Fast-SCNN-pytorch/models/fast_scnn.py", line 36, in forward
x = self.global_feature_extractor(higher_res_features)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/Fast-SCNN-pytorch/models/fast_scnn.py", line 186, in forward
x = self.ppm(x)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/Fast-SCNN-pytorch/models/fast_scnn.py", line 139, in forward
feat1 = self.upsample(self.conv1(self.pool(x, 1)), size)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/Fast-SCNN-pytorch/models/fast_scnn.py", line 61, in forward
return self.conv(x)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 83, in forward
exponential_average_factor, self.eps)
File "/home/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/torch/nn/functional.py", line 1693, in batch_norm
raise ValueError('Expected more than 1 value per channel when training, got input size {}'.format(size))
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 32, 1, 1])
- background is class 0.
- BatchNorm depends on the batch size: when the model is in train() mode, batch_size can't be set to 1.
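A sketch of the two usual workarounds, using standard PyTorch APIs rather than code from this repo: run single images in eval() mode (BatchNorm then uses its running statistics), and pass drop_last=True to the training DataLoader so a leftover batch of size 1 at the end of an epoch never reaches BatchNorm. The dataset sizes below are illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

bn = nn.BatchNorm2d(32)

# 1) In eval() mode BatchNorm uses running statistics, so a single
#    sample is fine; the restriction only applies in train() mode.
bn.eval()
out = bn(torch.randn(1, 32, 1, 1))  # no error

# 2) During training, drop the last incomplete batch: 33 samples with
#    batch_size=32 would otherwise leave a trailing batch of size 1.
dataset = TensorDataset(torch.randn(33, 3, 64, 64))
loader = DataLoader(dataset, batch_size=32, drop_last=True)
for (images,) in loader:
    print(images.shape)  # only full batches of 32 are yielded
```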
If I set batch_size = 1, the bug above appears; with batch_size = 4 or other values (8, 16, ...), there is no bug.
So I set batch_size = 4 and it works. Amazing!
I will fix the bug with batch_size = 32 as soon as possible.
OK, thank you very much!
There is one solution. If you encounter a batch of size 1, simply pad it with a dummy tensor of zeros, and before computing the loss discard that dummy output so it doesn't affect the gradients. But this adds some computational overhead.
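A minimal sketch of this padding idea; the tiny model, shapes, and loss here are placeholders, not the Fast-SCNN code. Note one caveat: the zero sample still contributes to the batch statistics inside BatchNorm, so the result is not exactly identical to a true single-sample forward pass, even though its output is discarded before the loss.

```python
import torch
import torch.nn as nn

# Placeholder model containing a BatchNorm layer (not Fast-SCNN itself).
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32))
criterion = nn.MSELoss()

images = torch.randn(1, 3, 8, 8)    # problematic batch of size 1
targets = torch.randn(1, 32, 8, 8)

if images.size(0) == 1:
    # Pad the batch with a zero sample so BatchNorm sees two values
    # per channel, then keep only the real sample's output.
    padded = torch.cat([images, torch.zeros_like(images)], dim=0)
    outputs = model(padded)[:1]      # discard the dummy output
else:
    outputs = model(images)

loss = criterion(outputs, targets)
loss.backward()                      # gradients flow only from the real sample
```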