This repo collects the evaluation metrics used by popular image retrieval datasets, helping beginners master the essentials of evaluating retrieval performance across related tasks, e.g. image retrieval, visual place recognition, and metric learning.
Link to revisitop.
### Input:
# ranks : zero-based ranks of the positive images among the retrieved results, e.g. [0, 1, 2, 3, 5, 7, ...]
# nres  : number of positive images (ground truth)
### Return:
# ap : average precision
"""
Example: ranks = [0, 1, 4], nres = 3
if k == 2:
    AP_k = (2/4 + 3/5) / (nres*2) = 0.18
elif k == 1:
    AP_k = (1/1 + 2/2) / (nres*2) = 0.33
elif k == 0:
    AP_k = (1/1 + 1/1) / (nres*2) = 0.33
AP = 0.85
"""
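The example above can be sketched as a runnable function. This follows the trapezoidal AP computation used by revisitop's `compute_ap` (the function name and argument names are assumed from that convention):

```python
def compute_ap(ranks, nres):
    """Average precision with trapezoidal interpolation.

    ranks : zero-based ranks of the positive images among the
            retrieved results, sorted ascending, e.g. [0, 1, 4]
    nres  : number of positive images in the ground truth
    """
    ap = 0.0
    recall_step = 1.0 / nres  # each positive contributes 1/nres recall

    for j, rank in enumerate(ranks):
        # precision just before and just after retrieving the j-th positive
        precision_0 = 1.0 if rank == 0 else float(j) / rank
        precision_1 = float(j + 1) / (rank + 1)
        # trapezoidal interpolation between the two precision values
        ap += (precision_0 + precision_1) * recall_step / 2.0
    return ap
```

With the example inputs, `compute_ap([0, 1, 4], 3)` reproduces the 0.85 worked out above.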
For each query, average precision is computed as

$$\mathrm{AP}(q) = \frac{1}{m_q}\sum_{k=1}^{n_q} P_q(k)\,\mathrm{rel}_q(k)$$

where $m_q$ is the number of positive images for query $q$, $P_q(k)$ is the precision at rank $k$, and $\mathrm{rel}_q(k)$ is 1 if the $k$-th result is correct and 0 otherwise. The mAP is the mean over all queries:

$$\mathrm{mAP} = \frac{1}{Q}\sum_{q=1}^{Q}\mathrm{AP}(q)$$

where $Q$ is the number of query images.
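A minimal sketch of per-query AP and mAP using the standard (non-interpolated) sum formula — note revisitop's own code additionally applies trapezoidal interpolation, so values differ slightly; the helper names here are hypothetical:

```python
def average_precision(rel):
    """AP from a binary relevance list `rel`, where rel[k-1] = 1 means
    the k-th retrieved result is correct: AP = (1/m) * sum_k P(k)*rel(k)."""
    m = sum(rel)                 # number of positives for this query
    hits, ap = 0, 0.0
    for k, r in enumerate(rel, start=1):
        if r:
            hits += 1
            ap += hits / k       # precision P(k) at each relevant rank
    return ap / m

def mean_average_precision(all_rel):
    """mAP: mean of per-query average precisions."""
    return sum(average_precision(r) for r in all_rel) / len(all_rel)
```

For example, a query with relevance pattern `[1, 1, 0, 0, 1]` gives AP = (1/1 + 2/2 + 3/5)/3.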
Link to GLDv2. Submit to Kaggle for evaluation.
The metric is mean average precision at 100:

$$\mathrm{mAP}@100 = \frac{1}{Q}\sum_{q=1}^{Q}\frac{1}{\min(m_q, 100)}\sum_{k=1}^{\min(n_q, 100)} P_q(k)\,\mathrm{rel}_q(k)$$

where:

- $Q$ is the number of query images
- $m_q$ is the number of index images containing a landmark in common with the query image $q$ (note that $m_q > 0$)
- $n_q$ is the number of predictions made by the system for query $q$
- $P_q(k)$ is the precision at rank $k$ for the $q$-th query
- $\mathrm{rel}_q(k)$ denotes the relevance of prediction $k$ for the $q$-th query: it's 1 if the $k$-th prediction is correct, and 0 otherwise
Link to SOP.
Same as for CUB, Cars, and iNaturalist: Recall@K.
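Recall@K on these datasets counts a query as a success if at least one correct match appears in its top-K results. A minimal sketch, assuming we already know the zero-based rank of each query's first correct match (the helper name and input format are hypothetical):

```python
def recall_at_k(first_match_ranks, ks=(1, 2, 4, 8)):
    """Recall@K for each K in `ks`.

    first_match_ranks : zero-based rank of the first correct match for
                        each query, one entry per query
    Returns {K: fraction of queries whose first match is within top-K}.
    """
    n = len(first_match_ranks)
    return {k: sum(r < k for r in first_match_ranks) / n for k in ks}
```

For example, three queries whose first matches sit at ranks 0, 3, and 1 give Recall@1 = 1/3, Recall@2 = 2/3, Recall@4 = 1.0.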