Matching Score of SIFT on HPatches differs from SuperPoint
mfaisal59 opened this issue
I see, the relevant numbers are .288 vs .313. That's not part of the standard HPatches benchmark (which is patch-based), so we ran it independently with different settings, even though we both used OpenCV. We probably used different image sizes and keypoint numbers, and maybe even sampled the image pairs differently (I think we matched the first image against the rest in every sequence; I'm not sure I'm reading the SP paper correctly, but they might've used all possible pairs?).
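To make the pair-sampling difference concrete, here is a minimal sketch (not code from either paper) of the two schemes. It assumes the standard HPatches layout of 6 images per sequence; the image names are illustrative:

```python
from itertools import combinations

# Each HPatches sequence has 6 images; names here are illustrative.
images = [f"img{i}" for i in range(1, 7)]

# Scheme 1: first vs rest -> 5 pairs per sequence
first_vs_rest = [(images[0], im) for im in images[1:]]

# Scheme 2: all possible pairs -> C(6, 2) = 15 pairs per sequence
all_pairs = list(combinations(images, 2))

print(len(first_vs_rest))  # 5
print(len(all_pairs))      # 15
```

With 3x the pairs per sequence, and the extra pairs not anchored to the reference image, the two schemes can easily yield different aggregate scores even with identical features.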
We should've been more explicit here, but we ran this last-minute for the appendix. The point of the experiment was just to show how learned methods compare to hand-crafted ones as a function of the inlier threshold.