yonghenglh6 / DepthwiseConvolution

A personal depthwise convolution layer implementation on Caffe by liuhao (GPU only).

About group number error

rickchen147258 opened this issue

Thanks for your work!
But I found that if (output channels / groups) != 1 and is some other integer, the net does not work.
For example, with input channels = 32, groups = 32, and output channels = 64, the network's loss does not decrease; in ImageNet-1000 training the loss stays at 6.9.
Do you know how to solve this problem?
Thanks!
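For reference, a minimal sketch of the kind of layer definition described above. The layer type string and blob names are assumptions, not taken from this repo; check how the layer is actually registered before using it.

```
layer {
  name: "conv_dw"
  type: "DepthwiseConvolution"  # assumed type name for this repo's layer
  bottom: "data"                # hypothetical 32-channel input blob
  top: "conv_dw"
  convolution_param {
    num_output: 64   # output channels / group == 64 / 32 == 2, the case that fails to train
    group: 32        # one group per input channel
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
```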

The operation you describe, which changes the number of output channels, is not the classic depthwise convolution from the MobileNet paper.
Sorry, my implementation does not support it.
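For context, the MobileNet paper changes the channel count not inside the depthwise layer but with a separate pointwise (1x1) convolution after it. Below is a hedged sketch of that pattern; the "DepthwiseConvolution" type name and blob names are assumptions.

```
# Depthwise step: num_output == group == input channels (32 -> 32)
layer {
  name: "conv_dw"
  type: "DepthwiseConvolution"  # assumed type name for this repo's layer
  bottom: "data"
  top: "conv_dw"
  convolution_param {
    num_output: 32
    group: 32
    kernel_size: 3
    pad: 1
  }
}
# Pointwise step: a standard 1x1 convolution expands 32 -> 64 channels
layer {
  name: "conv_pw"
  type: "Convolution"
  bottom: "conv_dw"
  top: "conv_pw"
  convolution_param {
    num_output: 64
    kernel_size: 1
  }
}
```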

Thanks!
I ask because the documentation in the official Caffe conv_layer.hpp says that output channels / groups can be greater than 1, like this:
  • group (\b optional, default 1). The number of filter groups. Group convolution is a method for reducing parameterization by selectively connecting input and output channels. The input and output channel dimensions must be divisible by the number of groups. For group @f$ \geq 1 @f$, the convolutional filters' input and output channels are separated s.t. each group takes 1 / group of the input channels and makes 1 / group of the output channels. Concretely 4 input channels, 8 output channels, and 2 groups separate input channels 1-2 and output channels 1-4 into the first group and input channels 3-4 and output channels 5-8 into the second group.
So I assumed your depthwise convolution layer could do the same thing as the official Caffe conv layer (see the sketch below).
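For comparison, a sketch of the stock Caffe Convolution layer configured exactly as the quoted example (blob names are made up): each group convolves 2 of the 4 input channels and produces 4 of the 8 output channels.

```
layer {
  name: "group_conv"
  type: "Convolution"   # the stock Caffe layer, which accepts group > 1
  bottom: "feat_4ch"    # hypothetical 4-channel input blob
  top: "group_conv"
  convolution_param {
    num_output: 8   # output channels 1-4 come from input channels 1-2 (group 1)
    group: 2        # output channels 5-8 come from input channels 3-4 (group 2)
    kernel_size: 3
    pad: 1
  }
}
```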

Sorry, it is a simple implementation that works like pooling, channel by channel.

Sorry, it does not provide the full group functionality; it is just a simple implementation that operates on the same number of output and input channels.

Thank you for answering my question.
I will try to implement it with the group function.