CompVis / metric-learning-divide-and-conquer

Source code for the paper "Divide and Conquer the Embedding Space for Metric Learning", CVPR 2019

is the algorithm of calculating recall@k metrics correct?

JasonKll opened this issue · comments

def calc_recall_at_k(T, Y, k):

Hi,
is the code above, which calculates the Recall@k metric, correct? It looks like top-k accuracy to me (we add 1 to the running sum if at least one image in the retrieved set belongs to the same class as the query image).
Recall@k, on the other hand, is defined as (# of recommended items @k that are relevant) / (total # of relevant items),
as in this article:
https://medium.com/@m_n_malaeb/recall-and-precision-at-k-for-recommender-systems-618483226c54
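To make the distinction concrete, here is a minimal sketch of the two quantities being discussed. The function names and argument layout are illustrative, not taken from the repository: the first function implements the "at least one same-class image among the top-k retrievals" behaviour described above, the second implements the recommender-style definition from the linked article.

```python
def recall_at_k_metric_learning(query_labels, neighbor_labels, k):
    """Fraction of queries for which at least one of the top-k
    retrieved images shares the query's class (the behaviour the
    issue describes; often called Recall@k in metric learning)."""
    hits = 0
    for lab, neigh in zip(query_labels, neighbor_labels):
        if lab in neigh[:k]:  # any same-class item in the top k?
            hits += 1
    return hits / len(query_labels)


def recall_at_k_recsys(relevant_items, retrieved_items, k):
    """Recommender-style Recall@k: relevant items found in the
    top-k retrievals, divided by the total number of relevant items."""
    retrieved_k = set(retrieved_items[:k])
    return len(retrieved_k & set(relevant_items)) / len(relevant_items)
```

For a single query with three relevant items of which only one appears in the top-2, the second definition gives 1/3, while the first definition would already count the query as a full hit.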

Thanks,
Jason

Hi,

I suspect that the recall calculation is incorrect, since it only checks for at least one correct retrieval. A correct way to calculate recall can be found at the link below: https://github.com/littleredxh/DREML/blob/master/_code/Utils.py

Please see this part of the code:

for r in rank:
    A = 0
    for i in range(r):
        imgPre = imgLab[idx[:, i]]
        A += (imgPre == imgLab).float()
    acc_list.append((torch.sum((A > 0).float()) / N).item())

So we should compare the predicted labels (imgPre) with the true labels (imgLab) for the retrieved images and divide by the total number of images (N) to compute recall.
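For reference, the quoted snippet can be re-expressed as a self-contained NumPy sketch. The variable names (imgLab, idx, rank) follow the snippet above; the shapes are my assumption: imgLab holds one class label per image, and idx[:, i] gives the index of each image's i-th nearest neighbour.

```python
import numpy as np

def recall_at_ranks(imgLab, idx, ranks):
    """NumPy re-implementation of the quoted DREML-style loop.

    imgLab: (N,) array of class labels, one per image.
    idx:    (N, max_r) array; idx[:, i] is each image's i-th
            nearest-neighbour index (assumed layout).
    ranks:  list of k values to evaluate.
    """
    N = len(imgLab)
    acc_list = []
    for r in ranks:
        A = np.zeros(N)
        for i in range(r):
            imgPre = imgLab[idx[:, i]]     # labels of the i-th neighbours
            A += (imgPre == imgLab)        # count matches within top-r
        acc_list.append((A > 0).sum() / N) # queries with >= 1 match
    return acc_list
```

Note that the final (A > 0) step again counts queries with at least one matching neighbour in the top r, so this loop measures the same at-least-one-hit quantity at each rank, just accumulated over several values of r.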