rabbityl / lepard

[CVPR 2022, Oral] Learning Partial point cloud matching in Rigid and Deformable scenes

RRE, RTE and RR

DavidBoja opened this issue

Hi, thank you for the code.

I was wondering what the RRE and RTE thresholds are for Table 4 in the paper.
Also, are the RRE and RTE averaged over only successful pairs?

Also, how do you compute the RR(%) in Table 4 -- is a pair successful if both the RRE and RTE are below their thresholds?

Thank you in advance,
David

Also, are the RRE and RTE averaged over only successful pairs?

  • Yes

Also, how do you compute the RR(%) in Table 4?

  • A registration is counted as successful if its end-point error on the points is below 0.04 (0.2^2).
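
For concreteness, here is a minimal numpy sketch of such a distance-based registration recall. The function names (`endpoint_rmse`, `distance_based_rr`) and the exact error definition (an RMSE over the source points) are my assumptions, so the repo's actual evaluation code should be treated as authoritative:

```python
import numpy as np

def endpoint_rmse(src, T_est, T_gt):
    """RMS distance between the source points transformed by the
    estimated vs. the ground-truth transformation (4x4 matrices)."""
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords
    p_est = (src_h @ T_est.T)[:, :3]
    p_gt  = (src_h @ T_gt.T)[:, :3]
    return np.sqrt(np.mean(np.sum((p_est - p_gt) ** 2, axis=1)))

def distance_based_rr(pairs, tau=0.2):
    """Fraction of pairs whose end-point RMSE is below tau.
    Thresholding RMSE at 0.2 is the same test as thresholding the
    mean squared error at 0.04 (0.2^2).
    pairs: iterable of (src_points, T_est, T_gt) tuples."""
    return np.mean([endpoint_rmse(s, Te, Tg) < tau for s, Te, Tg in pairs])
```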

Thanks for the reply.
So, to compute the recall, you say that a registration is successful if its error is less than 0.04 (0.2^2)?
The RRE and RTE measures are then computed and averaged over the successful examples?

I am asking because this differs from the standard way RRE and RTE are used with the RR measure -- usually a registration is deemed successful if (RRE < rre_threshold) and (RTE < rte_threshold).
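
For comparison, a sketch of that standard RRE/RTE-based recall, again assuming numpy; the 5° and 0.3 m thresholds below are illustrative values common in 3DMatch-style evaluations, not numbers taken from the paper:

```python
import numpy as np

def rre_deg(R_est, R_gt):
    """Relative rotation error in degrees: geodesic angle between
    the estimated and ground-truth rotations."""
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rte(t_est, t_gt):
    """Relative translation error: Euclidean distance between translations."""
    return np.linalg.norm(t_est - t_gt)

def threshold_based_rr(results, rre_thr=5.0, rte_thr=0.3):
    """Fraction of pairs with RRE and RTE both below their thresholds.
    results: iterable of (R_est, t_est, R_gt, t_gt) tuples."""
    hits = [rre_deg(Re, Rg) < rre_thr and rte(te, tg) < rte_thr
            for Re, te, Rg, tg in results]
    return np.mean(hits)
```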

commented

The RR measurement you mentioned is no good because it needs to balance the influence of rotation and translation.
The RR in our paper directly reflects the end-to-end registration effect on the points. Existing works, e.g. Predator, also regard this as the ultimate metric.

You still use a threshold to determine if a registration is successful; the difference is that your way thresholds the distance error between the point clouds, while using RRE and RTE thresholds the rotation and translation errors.

I agree, however, that the disadvantage of using RRE and RTE for RR is that you need to balance the influence of rotation and translation. The advantage, though, is that you separate the errors more finely (instead of one distance metric, you split the source of your errors into two properties).

Also, I believe the metrics are answering slightly different questions:

  1. The first one (using a distance threshold) answers the question: how many registration examples have, on average, points from the source point cloud closer than a threshold distance to the target point cloud?
  2. The second one (thresholding RRE and RTE) answers the question: how many registration examples have rotation and translation errors smaller than a threshold?

So, depending on the case, one might be more intuitive than the other. Many papers regard their metric as the ultimate one; both of the metrics described above are used in top conference papers.

commented

The first one is optimal because it is the general metric for point cloud registration. It handles all cases: e.g. if a scale change occurs between two point clouds, the first metric does not need to define an extra threshold for the scale difference. It also scales to more complex transformations, e.g. multi-body movements or non-rigid deformation, which the second one cannot handle at all.

In addition, even in the rigid case, the second metric still has a fatal flaw: for very large point clouds, e.g. those captured with a lidar mounted on a moving vehicle, a small rotation error can result in a very large end-point error on the points, i.e. a threshold on rotation cannot reflect how well the point clouds are aligned.
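
A quick back-of-the-envelope check of this point (the 100 m range and 1° error below are made-up numbers): a rotation by angle θ displaces a point at distance r from the rotation center by up to 2·r·sin(θ/2), so even a 1° rotation error moves a point 100 m away by roughly 1.75 m:

```python
import numpy as np

# Worst-case displacement of a point at distance r from the rotation
# center, caused by a rotation error of angle theta: 2 * r * sin(theta/2).
r = 100.0                      # metres; a distant point in a lidar scan
theta = np.radians(1.0)        # a "small" 1-degree rotation error
displacement = 2 * r * np.sin(theta / 2)
print(f"end-point error: {displacement:.2f} m")  # ~1.75 m
```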