It seems there is a bug in build_cliquenet (PyTorch)
zhaoying9105 opened this issue · comments
zhaoying9105 commented
My input size is (1, 3, 128, 128). After output = self.pool(output), I got this error:
Traceback (most recent call last):
File "D:/projects/machine_learning/gta5/train/clique_train.py", line 176, in <module>
Net()
File "D:/projects/machine_learning/gta5/train/clique_train.py", line 38, in Net
out = model.forward(x)
File "D:\projects\machine_learning\gta5\model\pytorch\cliquenet.py", line 70, in forward
feature_I_list.append(self.list_gb[i](block_feature_I))
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "D:\projects\machine_learning\gta5\model\pytorch\utils.py", line 65, in forward
output = self.pool(output)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "D:\Anaconda3\lib\site-packages\torch\nn\modules\pooling.py", line 547, in forward
self.padding, self.ceil_mode, self.count_include_pad)
RuntimeError: Given input size: (152x32x32). Calculated output size: (152x0x0). Output size is too small at c:\programdata\miniconda3\conda-bld\pytorch_1524549877902\work\aten\src\thcunn\generic/SpatialAveragePooling.cu:63
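For context, this error fires whenever a fixed-size pooling layer receives a feature map smaller than its kernel, so the computed output collapses to 0x0. A minimal sketch reproducing it (the 152x32x32 shape is taken from the traceback; the kernel size of 56 is an illustrative value larger than 32, not the value from the repo):

```python
import torch
import torch.nn as nn

# Feature map shape from the traceback above.
x = torch.randn(1, 152, 32, 32)

# A pooling kernel larger than the 32x32 spatial size yields a 0x0 output,
# which PyTorch rejects with "Output size is too small".
pool = nn.AvgPool2d(kernel_size=56)

try:
    pool(x)
except RuntimeError as e:
    print(e)  # e.g. "Given input size ... Calculated output size ... too small"
```

This is why the error depends on the input image size: a smaller input shrinks the intermediate feature maps below what the hard-coded pool expects.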
Joyies commented
I also met this problem. My input size is 64x3x32x32.
Joyies commented
Have you solved the problem?
zhaoying9105 commented
Not really. I gave up and switched to DenseNet.
Yibo Yang commented
Sorry for the late reply. Did you resize and crop the images in the train/val data loaders? If you are using your own dataset, you should pre-process the images into the input size required by whatever classification model you use. Alternatively, you can modify the model to make it work on your dataset.
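A sketch of the second option (modifying the model): replacing the fixed-size pool with an adaptive pool makes the layer input-size agnostic, since it always produces the requested output size regardless of the incoming spatial dimensions. This is an illustrative workaround, not a change from the repo itself:

```python
import torch
import torch.nn as nn

# Global average pooling that works for any HxW, in place of a fixed AvgPool2d.
pool = nn.AdaptiveAvgPool2d(1)

# The feature map shape from the traceback; no size error here.
x = torch.randn(1, 152, 32, 32)
print(pool(x).shape)  # torch.Size([1, 152, 1, 1])
```

The first option (pre-processing) would instead resize and crop images in the data loader, e.g. with torchvision's Resize and CenterCrop transforms, to whatever input size the chosen CliqueNet variant was built for.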