kentsommer / keras-inceptionV4

Keras Implementation of Google's Inception-V4 Architecture (includes Keras compatible pre-trained weights)

Mean subtraction and RGB to BGR in the preprocessing step of the Inception models

i-akbari opened this issue

This is the preprocessing function of your Inception-v4:

import numpy as np

def preprocess_input(x):
    # Scale pixel values from [0, 255] to [-1, 1]
    x = np.divide(x, 255.0)
    x = np.subtract(x, 0.5)
    x = np.multiply(x, 2.0)
    return x

Why is it so different from the preprocessing of the other models?
1. Why is there no mean subtraction?
2. Why is there no RGB to BGR conversion? Instead you used RGB.
3. Is mapping to [-1, 1] (or [-x, +x]) normal for all Inception models?

This is the preprocessing function of VGG and ResNet in Keras:

from keras import backend as K

def preprocess_input(x, data_format=None):
    if data_format is None:
        data_format = K.image_data_format()
    assert data_format in {'channels_last', 'channels_first'}

    if data_format == 'channels_first':
        # 'RGB'->'BGR'
        x = x[:, ::-1, :, :]
        # Zero-center by mean pixel
        x[:, 0, :, :] -= 103.939
        x[:, 1, :, :] -= 116.779
        x[:, 2, :, :] -= 123.68
    else:
        # 'RGB'->'BGR'
        x = x[:, :, :, ::-1]
        # Zero-center by mean pixel
        x[:, :, :, 0] -= 103.939
        x[:, :, :, 1] -= 116.779
        x[:, :, :, 2] -= 123.68
    return x

Caffe models also use mean subtraction and RGB-to-BGR conversion.
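
For a concrete comparison (a quick sketch of my own, not taken from either library), the same RGB batch ends up in very different value ranges under the two schemes:

import numpy as np

x = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype('float64')

# Inception-style: scale to roughly [-1, 1], channel order stays RGB
inception = (x / 255.0 - 0.5) * 2.0

# VGG/ResNet-style ('channels_last'): flip to BGR, zero-center by the ImageNet mean pixel
vgg = x[:, :, :, ::-1].copy()
vgg[:, :, :, 0] -= 103.939
vgg[:, :, :, 1] -= 116.779
vgg[:, :, :, 2] -= 123.68

print(inception.min(), inception.max())   # about -1.0 .. 1.0
print(vgg.min(), vgg.max())               # about -123.7 .. 151.1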

Hi @i-akbari

You are comparing apples to oranges.

I would encourage you to read Google's implementation of the preprocessing function for the Inception architectures: https://github.com/tensorflow/models/blob/master/slim/preprocessing/inception_preprocessing.py#L237-L275

  1. The Inception architecture does not require mean subtraction, so it is not done.
  2. The original weights trained in-house at Google used an RGB scheme. If you want to retrain the network from scratch, RGB and BGR are both acceptable options; consistency between training and testing is all that matters.
  3. Yes, mapping to [-1, 1] is standard for the Inception models (see the quick check below).
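
As a quick check (my own sketch, not part of the repo; the image file name and the 299x299 input size for Inception-v4 are assumptions), the scaling quoted above maps uint8 RGB pixels into [-1, 1] without touching the channel order:

import numpy as np
from keras.preprocessing import image

def preprocess_input(x):
    # Same scaling as in the snippet above: [0, 255] -> [0, 1] -> [-0.5, 0.5] -> [-1, 1]
    x = np.divide(x, 255.0)
    x = np.subtract(x, 0.5)
    x = np.multiply(x, 2.0)
    return x

img = image.load_img('elephant.jpg', target_size=(299, 299))  # hypothetical image file
x = image.img_to_array(img)    # float array in RGB order, values in [0, 255]
x = np.expand_dims(x, axis=0)  # add the batch dimension
x = preprocess_input(x)

print(x.min(), x.max())        # both ends fall inside [-1, 1]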