dvlab-research / Parametric-Contrastive-Learning

Parametric Contrastive Learning (ICCV2021) & GPaCo (TPAMI 2023)

Home Page: https://arxiv.org/abs/2107.12028


Questions about tau-norm with randaugment

adf1178 opened this issue · comments

Thanks for your exciting work!
Tau-norm with randaugment performs very well, as shown in Table 3 and Table 5. I wonder about its implementation: do you just use augmentation_randncls as the train_transform in training stage 1?

Hi,
Thank you for your questions!
For the tau-norm method, in training stage 1 we use the standard augmentations from the original paper (https://github.com/facebookresearch/classifier-balancing) plus randaugment, like this:
```python
[
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0),
    randaugment,
    transforms.ToTensor(),
    normalize,
]
```

Thanks for your reply!
Here is my implementation:
```python
# transforms is torchvision.transforms; rand_augment_transform, ra_params,
# and normalize are defined elsewhere in the training script.
augmentation_randncls = [
    transforms.RandomResizedCrop(224, scale=(0.08, 1.)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([
        transforms.ColorJitter(0.4, 0.4, 0.4, 0.0)
    ], p=1.0),
    rand_augment_transform('rand-n{}-m{}-mstd0.5'.format(2, 10), ra_params),
    transforms.ToTensor(),
    normalize,
]
train_transform = transforms.Compose(augmentation_randncls)
```
Is that right?

Yes, it is.


Hi, I have another question. What are the implementation details for tau-norm, for example the learning rate, batch size, and number of training epochs?

Hi,
We follow the standard setup: learning rate 0.05, batch size 128, and 400 training epochs.
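
For readers unfamiliar with the method: tau-norm is applied after stage-1 training and simply rescales each classifier weight vector by its L2 norm raised to the power tau (see the classifier-balancing paper linked above). A minimal sketch, assuming a standard PyTorch linear classifier; the function name and default tau here are illustrative, not taken from this repo:

```python
import torch

def tau_normalize(weight: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # weight: (num_classes, feat_dim) classifier matrix from the stage-1 model.
    # Each class vector w_i is rescaled to w_i / ||w_i||^tau;
    # tau = 0 leaves the classifier unchanged, and tau is tuned on a validation set.
    norms = weight.norm(p=2, dim=1, keepdim=True)  # shape (num_classes, 1)
    return weight / norms.pow(tau)
```

At inference the rescaled weights replace the original classifier weights (the classifier-balancing paper computes these logits without a bias term).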