gjy3035 / GCC-SFCN

This is the official code of the spatial FCN (SFCN) in the paper Learning from Synthetic Data for Crowd Counting in the Wild [CVPR 2019].

Home Page: https://gjy3035.github.io/GCC-CL/

Is it possible to release the VGG network pretrained on GCC?

xialeiliu opened this issue

I would like to do a quick experiment with the VGG network; I would appreciate it if you could help with this.

VGG network? Do you mean SFCN based on VGG, or pure VGG? We have conducted two baselines with a VGG backbone. Unfortunately, the trained models are saved on my server; when I return to the lab, I can share the pretrained weights and the model definitions with you.

Pure VGG; I want to compare training from an ImageNet-pretrained VGG-16 and a GCC-pretrained VGG-16.
Thanks a lot!
By the way, have you observed that the GCC-pretrained network is better than the ImageNet-pretrained one?
I guess you compared the two in your paper.

@xialeiliu I am a co-author of this paper. The code you want is as follows:

VGG:

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# NOTE: Conv2d below is the repository's custom conv + activation wrapper
# (with the same_padding / NL arguments), not torch.nn.Conv2d; a sketch of
# such a wrapper is given after this block. model_path is not defined in the
# snippet and should point to the ImageNet-pretrained VGG-16 weights file.

class VGG(nn.Module):
    def __init__(self, pretrained=True):
        super(VGG, self).__init__()
        vgg = models.vgg16()
        if pretrained:
            vgg.load_state_dict(torch.load(model_path))
        # VGG-16 feature layers up to conv4_3 + ReLU (three max-pools, stride 8)
        features = list(vgg.features.children())
        self.features4 = nn.Sequential(*features[0:23])

        # density-map head: 512 -> 128 -> 1 channels, 1x1 convolutions
        self.de_pred = nn.Sequential(
            Conv2d(512, 128, 1, same_padding=True, NL='relu'),
            Conv2d(128, 1, 1, same_padding=True, NL='relu')
        )

    def forward(self, x):
        x = self.features4(x)
        x = self.de_pred(x)
        # recover the input resolution; F.upsample is deprecated in newer
        # PyTorch, F.interpolate(x, scale_factor=8) is equivalent
        x = F.upsample(x, scale_factor=8)
        return x
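
Note that Conv2d in the snippets above and below is not torch.nn.Conv2d but the repository's small conv + non-linearity wrapper. A minimal sketch of such a wrapper, assuming the same_padding and NL arguments behave as their names suggest (the repo's exact implementation may differ slightly):

import torch.nn as nn

class Conv2d(nn.Module):
    # minimal conv + optional batch norm + non-linearity wrapper (sketch only)
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 same_padding=False, NL='relu', bn=False):
        super(Conv2d, self).__init__()
        padding = (kernel_size - 1) // 2 if same_padding else 0
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding=padding)
        self.bn = nn.BatchNorm2d(out_channels) if bn else None
        self.relu = nn.ReLU(inplace=True) if NL == 'relu' else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x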

VGG decoder:

# Uses the same imports and custom Conv2d wrapper as the VGG model above.
class VGG_decoder(nn.Module):
    def __init__(self, pretrained=True):
        super(VGG_decoder, self).__init__()
        vgg = models.vgg16()
        if pretrained:
            vgg.load_state_dict(torch.load(model_path))
        # same VGG-16 backbone, truncated at conv4_3 + ReLU (stride 8)
        features = list(vgg.features.children())
        self.features4 = nn.Sequential(*features[0:23])

        # learned decoder: three stride-2 transposed convolutions upsample the
        # 1/8-resolution features back to the input resolution
        self.de_pred = nn.Sequential(
            Conv2d(512, 128, 3, same_padding=True, NL='relu'),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1, output_padding=0, bias=True),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1, output_padding=0, bias=True),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1, output_padding=0, bias=True),
            nn.ReLU(),
            Conv2d(16, 1, 1, same_padding=True, NL='relu')
        )

    def forward(self, x):
        x = self.features4(x)
        x = self.de_pred(x)
        return x
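
Both models map an H x W image to a full-resolution, single-channel density map: the truncated VGG-16 backbone downsamples by 8 (three max-pools); VGG recovers the resolution with a fixed x8 upsampling, while VGG_decoder learns it with the three stride-2 transposed convolutions. A quick shape check (using pretrained=False so no model_path / weights file is needed):

import torch

# randomly initialized models, so no ImageNet weights file is required
vgg_plain = VGG(pretrained=False)
vgg_dec = VGG_decoder(pretrained=False)

x = torch.randn(1, 3, 384, 512)  # dummy crowd image
with torch.no_grad():
    y1 = vgg_plain(x)
    y2 = vgg_dec(x)

print(y1.shape)  # torch.Size([1, 1, 384, 512]) - 1/8-resolution map upsampled x8
print(y2.shape)  # torch.Size([1, 1, 384, 512]) - decoded by transposed convs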

@gjy3035 will provide the pretrained VGG model parameters later if you need them.

@xialeiliu In our paper, we have compared the results using different initializations. Due to other commitments, the paper has not been polished yet; we will upload it in 5 days.
(image attachment: comparison results)

That's exactly what I wanted to see; very interesting results. Thanks for posting them here!