wkentaro / pytorch-fcn

PyTorch Implementation of Fully Convolutional Networks. (Training code to reproduce the original result is available.)

The weight in deconv doesn't update?

Lucksong opened this issue · comments

In get_parameters():
    def get_parameters(model, bias=False):
        import torch.nn as nn
        modules_skipped = (
            nn.ReLU,
            nn.MaxPool2d,
            nn.Dropout2d,
            nn.Sequential,
            torchfcn.models.FCN32s,
            torchfcn.models.FCN16s,
            torchfcn.models.FCN8s,
        )
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                if bias:
                    yield m.bias
                else:
                    yield m.weight
            elif isinstance(m, nn.ConvTranspose2d):
                # weight is frozen because it is just a bilinear upsampling
                if bias:
                    assert m.bias is None
            elif isinstance(m, modules_skipped):
                continue
            else:
                raise ValueError('Unexpected module: %s' % str(m))

Yeah, it's frozen.

But why?

According to the authors, it didn't change the final result very much. That is probably because deconvolution layers are hard to train in general (compared to standard convolutional layers).
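For context, the frozen deconv weight is just the standard bilinear interpolation kernel. A minimal NumPy sketch of how such a kernel is built (mirroring torchfcn's `get_upsampling_weight`; the function name here is for illustration):

```python
import numpy as np

def bilinear_upsampling_weight(in_channels, out_channels, kernel_size):
    """Build the fixed bilinear kernel used to initialize a deconv layer.

    Since the kernel is plain bilinear interpolation, there is little for
    gradient descent to improve, which is why the layer can stay frozen.
    """
    factor = (kernel_size + 1) // 2
    if kernel_size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    # separable tent (triangle) filter in each spatial dimension
    filt = (1 - abs(og[0] - center) / factor) * \
           (1 - abs(og[1] - center) / factor)
    # each channel upsamples itself; cross-channel weights stay zero
    weight = np.zeros((in_channels, out_channels, kernel_size, kernel_size),
                      dtype=np.float64)
    weight[range(in_channels), range(out_channels), :, :] = filt
    return weight
```

The resulting array is copied into the `ConvTranspose2d` weight at initialization, and because `get_parameters` never yields that weight, the optimizer simply never touches it.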