RuntimeError: Given input size: (576x27x27). Calculated output size: (576x-1x-1). Output size is too small
Lily1992 opened this issue · comments
I got this error when training with the mobilenetv3_small model on my own VOC-style dataset. Could you give me some advice?
Also, could you tell me which torch and torchvision versions you used?
I'm having the same issue. Were you able to overcome that?
Maybe you need to modify the mobilenetv3_seg.py script, around line 60:
class _LRASPP(nn.Module):
    """Lite R-ASPP"""

    def __init__(self, in_channels, norm_layer, **kwargs):
        super(_LRASPP, self).__init__()
        out_channels = 128
        self.b0 = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, bias=False),
            norm_layer(out_channels),
            nn.ReLU(True)
        )
        self.b1 = nn.Sequential(
            nn.AdaptiveAvgPool2d((8, 8)),
            # nn.AvgPool2d(kernel_size=(49, 49), stride=(16, 20)),  # check it
            nn.Conv2d(in_channels, out_channels, 1, bias=False),
            nn.Sigmoid(),
        )
I encountered the same error and had no idea why. I modified it as @raysue suggested and then it works.
When I exported the ONNX model, ONNX could not support adaptive_avg_pool2d. How can I fix this?
Upgrade your ONNX version and export with opset=13, or fix your input size and replace the adaptive pooling with nn.AvgPool2d using suitable kernel_size and stride parameters.
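If the input resolution is fixed, one way to pick those AvgPool2d parameters is to derive them from the adaptive target size. The helper below (`adaptive_to_fixed` is a hypothetical name, not part of any library) is a sketch using the common choice stride = in // out, kernel = in - (out - 1) * stride:

```python
def adaptive_to_fixed(in_size, out_size):
    """Derive fixed AvgPool2d (kernel_size, stride) that mimic
    AdaptiveAvgPool2d(out_size) for a known input size.
    Assumes stride = in // out and kernel = in - (out - 1) * stride."""
    stride = in_size // out_size
    kernel = in_size - (out_size - 1) * stride
    return kernel, stride

# For the 27x27 feature map and the (8, 8) adaptive target above:
kernel, stride = adaptive_to_fixed(27, 8)
print(kernel, stride)  # 6 3

# Sanity check with the pooling formula: floor((27 - 6) / 3) + 1 == 8
print((27 - kernel) // stride + 1)  # 8
```

With these values, `nn.AvgPool2d(kernel_size=6, stride=3)` produces the same 8x8 output for 27x27 inputs and exports to ONNX without the adaptive_avg_pool2d op. Note this only holds for that one input size; other resolutions need their own parameters.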