YantaoShen / openBCT

PyTorch code for Towards Backward-Compatible Representation Learning [CVPR 2020 Oral]

Is it a typo that the metric for IJB-C 1:N is TNIR(%)@FPIR=10^-2 in the paper?

pansanity666 opened this issue

Hi, it's me again LOL, sorry for so many questions.

In the paper you said that the metric for IJB-C 1:N is TNIR(%)@FPIR. But to the best of my knowledge, TNIR is equal to 1 - FPIR.
Maybe you mean TPIR(%)@FPIR=10^-2, which corresponds to the DET (Decision Error Trade-off) curve, as in this paper?

By the way, for the open-set test protocol, should I calculate the metric for probe-G1 and probe-G2 separately, and then average them?

I am deeply sorry for this typo! I will revise it right now (only the table caption is wrong; the metric description in Sec. 3.1 is correct).

It should be TPIR@FPIR, which is the standard testing protocol. Thanks for spotting the typo.
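For concreteness, here is a minimal NumPy sketch (the function and variable names are hypothetical) of how TPIR at a target FPIR can be computed from top-match similarity scores. It simplifies the full IJB-C protocol, which additionally requires that the top gallery match be the correct identity:

```python
import numpy as np

def tpir_at_fpir(mated_scores, nonmated_scores, target_fpir=1e-2):
    """Sketch: TPIR at a target FPIR for open-set 1:N identification.

    mated_scores:    top-1 gallery similarity for probes whose identity is in
                     the gallery; probes whose top match is the wrong identity
                     should contribute 0 to TPIR (e.g. pass -inf for them).
    nonmated_scores: top-1 gallery similarity for probes whose identity is
                     not in the gallery.
    """
    # Choose the threshold so that the fraction of non-mated probes
    # scoring at or above it equals the target FPIR.
    threshold = np.quantile(nonmated_scores, 1.0 - target_fpir)
    # TPIR = fraction of mated probes accepted at that threshold.
    return float(np.mean(np.asarray(mated_scores) >= threshold))
```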

The open-set TPIR@FPIR values are calculated at each FPIR level for the two splits and then averaged to produce the values presented in the paper.

An example to illustrate this:

| FPIR | TPIR (G1) | TPIR (G2) | TPIR (averaged) |
| --- | --- | --- | --- |
| 10^-2 | value_a | value_b | (value_a + value_b) / 2 |
| 10^-3 | value_c | value_d | (value_c + value_d) / 2 |
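And a minimal sketch of that per-level averaging, with made-up numbers standing in for value_a through value_d:

```python
# Hypothetical per-split results; the numbers are purely illustrative.
fpir_levels = [1e-2, 1e-3]
tpir_g1 = {1e-2: 0.80, 1e-3: 0.65}  # TPIR on the probe-G1 split
tpir_g2 = {1e-2: 0.78, 1e-3: 0.63}  # TPIR on the probe-G2 split

# Average the two splits at each FPIR level, as in the table above.
tpir_avg = {f: (tpir_g1[f] + tpir_g2[f]) / 2 for f in fpir_levels}
print(tpir_avg)  # averaged TPIR per FPIR level, e.g. {0.01: 0.79, 0.001: 0.64}
```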

Thank you very much~