YuehuaZhu / ProxyGML

Official PyTorch Implementation of ProxyGML Loss for Deep Metric Learning, NeurIPS 2020 (spotlight)

A small question about the code for training on SOP

Dyfine opened this issue

Hi, thanks very much for sharing the code. When I use it to train models on the SOP dataset, I get unexpectedly low results. Checking the code, I found that in ProxyGML/loss/ProxyGML.py one line (line 36) has been commented out in the function below.

```python
def scale_mask_softmax(self, tensor, mask, softmax_dim, scale=1.0):
    # scale = 1.0 if self.opt.dataset != "online_products" else 20.0
    scale_mask_exp_tensor = torch.exp(tensor * scale) * mask.detach()
    scale_mask_softmax_tensor = scale_mask_exp_tensor / (1e-8 + torch.sum(scale_mask_exp_tensor, dim=softmax_dim)).unsqueeze(softmax_dim)
    return scale_mask_softmax_tensor
```

After uncommenting this line I get the expected results, i.e., 78.0 R@1. So I think this line may have been commented out by mistake?
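For reference, the effect of that scale factor can be seen with a standalone version of the masked softmax. The sketch below (the toy similarity scores and mask are made up for illustration) shows that entries outside the mask stay exactly zero, each row still sums to 1 over the unmasked entries, and a larger scale (20.0, as used for online_products) sharpens the distribution toward the best match:

```python
import torch

def scale_mask_softmax(tensor, mask, softmax_dim, scale=1.0):
    # Softmax restricted to unmasked entries; larger scale sharpens it.
    scale_mask_exp_tensor = torch.exp(tensor * scale) * mask.detach()
    return scale_mask_exp_tensor / (
        1e-8 + torch.sum(scale_mask_exp_tensor, dim=softmax_dim)
    ).unsqueeze(softmax_dim)

torch.manual_seed(0)
sims = torch.randn(2, 5)                      # toy similarity scores
mask = torch.tensor([[1., 1., 0., 1., 0.],    # toy neighbor mask
                     [0., 1., 1., 1., 1.]])

soft = scale_mask_softmax(sims, mask, softmax_dim=1, scale=1.0)
sharp = scale_mask_softmax(sims, mask, softmax_dim=1, scale=20.0)

# Masked entries are zero, rows sum to ~1, and scale=20 concentrates
# more probability mass on the top unmasked entry than scale=1.
print(soft)
print(sharp)
```

Since the comment ties scale = 20.0 to the online_products setting, skipping that line silently trains SOP with scale = 1.0, which would explain the degraded R@1.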

Thank you for your attention, and you are right.