dvlab-research / Parametric-Contrastive-Learning

Parametric Contrastive Learning (ICCV2021) & GPaCo (TPAMI 2023)

Home Page: https://arxiv.org/abs/2107.12028


About learnable centers in Paco-loss

butcher1226 opened this issue · comments

In your paper, you mention that when computing the PaCo loss for a sample x_i, the learnable centers c_j, j = 1...m, are also included as positive/negative samples; moreover, centers treated as positives are given a different weight than the other positives, which are data samples rather than centers.
However, when I checked the code of GPaCo and PaCo, I found no use of centers in PacoLoss.
When reproducing your work, I also found that including the centers actually hurts model performance.
Could you explain the reason? I am quite puzzled by this issue.

Hi,

Thanks for your interest in our work.
The learnable centers are simply the learnable weights of the final linear layer.
See https://github.com/dvlab-research/Parametric-Contrastive-Learning/blob/main/GPaCo/LT/losses.py#28: the sup_logits are the dot products between the sample features and the learnable centers.
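To make this concrete, here is a minimal sketch (plain Python, toy made-up values; the helper names are hypothetical, not from the repo) of how sup_logits arise as dot products between normalized sample features and the classifier's weight rows, which play the role of the learnable centers:

```python
import math

def l2_normalize(v):
    # Normalize a feature vector to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sup_logits(features, centers):
    # The "learnable centers" are just the weight rows of the linear
    # classifier; sup_logits[i][j] is the dot product of feature i
    # with center j.
    return [[sum(f * c for f, c in zip(feat, ctr)) for ctr in centers]
            for feat in features]

# Toy example: 2 samples, 3 classes, feat_dim = 2 (made-up numbers).
feats = [l2_normalize([1.0, 0.0]), l2_normalize([0.0, 2.0])]
centers = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # one center per class
logits = sup_logits(feats, centers)
```

So no separate "center" tensors are needed in the loss: the sup_logits term already contrasts each sample against every center through the linear layer.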

You are welcome to discuss any further questions about the work.

Thanks for your reply!
I found a small error in 'MOCO/builder.py': when the momentum queue is initialized, the code calls 'nn.functional.normalize(self.queue, dim=0)', but dim should be 1. In the original MoCo code, dim=0 is correct because queue.shape = [feat_dim, K]; in your code queue.shape = [K, feat_dim], so dim should be 1. In practice I think it doesn't matter much, though :)
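To illustrate why the dim matters, here is a dependency-free sketch (plain Python, toy values; equivalent in spirit to nn.functional.normalize with dim=1) of normalizing a queue of shape [K, feat_dim] along its rows:

```python
import math

def normalize_rows(queue):
    # Equivalent of nn.functional.normalize(queue, dim=1) for a queue
    # of shape [K, feat_dim]: each row (one stored feature) becomes
    # unit-norm. Normalizing over dim=0 would instead mix values
    # across different queue entries, which is not what is intended.
    out = []
    for row in queue:
        n = math.sqrt(sum(x * x for x in row))
        out.append([x / n for x in row])
    return out

# Toy queue with K = 2 entries and feat_dim = 2 (made-up numbers).
queue = [[3.0, 4.0], [0.0, 5.0]]
normed = normalize_rows(queue)
```

After this, every queue entry has L2 norm 1, which is the property the contrastive dot products rely on.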