xuelunshen / rfnet

RF-Net: An End-to-End Image Matching Network based on Receptive Field


Matching problem

AlexeySrus opened this issue · comments

How can I match the detectAndCompute inference results on two images?
I am asking because I tried the default OpenCV matching approaches, such as BFMatcher and FLANN with the kNN method, but got bad results.

Code:

base_kp1, base_des1 = feature_extractor.detectAndCompute(img1, None)
warp_kp2, warp_des2 = feature_extractor.detectAndCompute(img2, None)
bf = cv2.BFMatcher()
matches = bf.knnMatch(warp_des2, base_des1, k=2)

Note: detectAndCompute has been modified to accept a different input.
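Raw kNN matches are usually very noisy without filtering, which may explain the bad results. A standard remedy (not specific to RF-Net) is Lowe's ratio test: keep a match only when the nearest descriptor is clearly closer than the second nearest. A minimal NumPy sketch, with the function name and ratio value chosen here for illustration:

```python
import numpy as np

def ratio_test_match(des1, des2, ratio=0.8):
    """Match descriptors by nearest neighbor, filtered by Lowe's ratio test.

    Returns a list of (index_in_des1, index_in_des2) pairs.
    """
    # Pairwise L2 distance matrix of shape (N1, N2)
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=-1)
    # Indices of the two closest descriptors in des2 for each row
    order = np.argsort(d, axis=1)
    nn, nn2 = order[:, 0], order[:, 1]
    rows = np.arange(d.shape[0])
    # Keep a match only if the best distance is clearly below the second best
    keep = d[rows, nn] < ratio * d[rows, nn2]
    return [(int(i), int(j)) for i, j in zip(rows[keep], nn[keep])]
```

The same filtering can be applied to the `bf.knnMatch(..., k=2)` output by keeping `m` whenever `m.distance < ratio * n.distance` for each `(m, n)` pair.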

I also tried the following approach:

import numpy as np

def nearest_neighbor_match_score(des1, des2, kp1, kp2, COO_THRSH):
    # Descriptor distance matrix between the two sets
    des_dist_matrix = distance_matrix_vector(des1, des2)
    # Nearest neighbor in des2 for each descriptor in des1
    nn_value, nn_idx = des_dist_matrix.min(dim=-1)

    kp1w = kp1
    nn_kp2 = kp2.index_select(dim=0, index=nn_idx)

    # Coordinate distances between every kp1 and every matched kp2
    coo_dist_matrix = pairwise_distances(
        kp1w[:, 1:3].float(), nn_kp2[:, 1:3].float()
    )

    # Keep index pairs whose coordinate distance is within COO_THRSH
    return list(
        zip(
            *np.where(coo_dist_matrix.le(COO_THRSH).to('cpu').numpy() > 0)
        )
    )

Please share your matching code.

Hi, @AlexeySrus

I have updated example.py with the NNR match strategy.

You can run the code with the following command:

CUDA_VISIBLE_DEVICES=0 python example.py --imgpath ./material/img3.png@./material/img2.png --resume ../runs/10_24_09_25/model/e121_NN_0.480_NNT_0.655_NNDR_0.813_MeanMS_0.649.pth.tar

All matching strategies are implemented in eval_utils.py; you can find them there.
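The checkpoint filename above lists several strategies (NN, NNT, NNDR). Besides the distance-ratio test, another common filter is the mutual nearest-neighbor check: keep a pair only if each point is the other's nearest neighbor. This is a hypothetical NumPy sketch of that idea, not necessarily identical to the code in eval_utils.py (which operates on torch tensors):

```python
import numpy as np

def mutual_nn_match(des1, des2):
    """Keep only pairs (i, j) where des1[i] and des2[j] are mutual nearest neighbors."""
    # Pairwise L2 distance matrix of shape (N1, N2)
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=-1)
    nn12 = d.argmin(axis=1)  # nearest index in des2 for each des1
    nn21 = d.argmin(axis=0)  # nearest index in des1 for each des2
    idx1 = np.arange(len(des1))
    # A pair is mutual when following the match back returns the start index
    mutual = nn21[nn12] == idx1
    return [(int(i), int(j)) for i, j in zip(idx1[mutual], nn12[mutual])]
```

Mutual-NN filtering discards one-sided matches, which tends to remove many outliers at the cost of some recall.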

In fact, RF-Net does not perform particularly well under scale transformations, but it performs better under viewpoint and illumination changes.
This may be because it is trained only on a small dataset (47 viewpoint sequences from HPatches, containing 235 image pairs).