doughtmw / display-calibration-hololens

Point correspondence-based display calibration and ArUco marker tracking for the HoloLens 2.

How to compute the registration accuracy?

nocardio opened this issue

Hi @doughtmw,

Thank you very much for sharing this incredible repository.
FYI, I built and deployed hmd-scene to the HoloLens 2 and it's working successfully.
The models were perfectly overlaid on the boards and target template.
However, when I tried to record/capture the scene via the Device Portal or using the recording button on the HL2, I got a really big offset.

I'm wondering how you managed to measure the registration accuracy, as I get a really big error when using the capture from the Device Portal.
One more thing: when I test other target templates for the hmd scene, the HL2 always overlays the model w.r.t. trace_target_04.pdf.

Thank you very much, and I really appreciate your work! ^^

Hi @nocardio, I'm glad the sample is working well for you so far! There are some more details in my paper linked in the README, but part of the challenge of evaluating error with head-mounted displays is that virtual content can be perceived differently by each wearer of the headset. In my paper, I performed a user study where participants drew onto the trace template, which I then scanned and compared to the ground truth to estimate point and trace error.
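
For what it's worth, once corresponding points are extracted from the scanned templates, the error metrics reduce to simple distance computations. Below is a rough sketch rather than the exact code used for the paper; the class and method names are illustrative, and it assumes both point sets are already in the same millimetre-scale frame.

using System;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public static class TraceErrorUtils
{
    // Mean Euclidean distance between corresponding traced and
    // ground-truth points, in the units of the inputs (e.g. mm).
    public static float MeanPointError(IList<Vector2> traced, IList<Vector2> groundTruth)
    {
        if (traced.Count != groundTruth.Count)
            throw new ArgumentException("Point lists must be the same length.");
        return traced.Zip(groundTruth, Vector2.Distance).Average();
    }

    // Approximate trace error: for each traced sample, the distance
    // to the nearest sample of a densely sampled ground-truth curve.
    public static float MeanTraceError(IEnumerable<Vector2> traced, IList<Vector2> curve)
    {
        return traced.Select(t => curve.Min(c => Vector2.Distance(t, c))).Average();
    }
}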

With the Mixed Reality Capture API, there is a significant offset introduced (usually in the vicinity of 1-2 cm) because the capture application renders a 2D image of the scene from the right-eye camera of the headset's 3D stereo view. If you want to record images/video of the tracking experience, I would recommend a webcam mounted behind the eye box of the headset. This is how I recorded the GIF used in the README file.

A random trace target should be selected by the Unity Engine RNG; it's possible that you ended up with the same selection multiple times in a row. Otherwise, the relevant method for the RNG selection is below if you want to adapt it.

public void SetGameObjectsFromRng()
{
    // Select the board game object to use for the current tracing target.
    // Random.Range(int, int) is min inclusive and max exclusive, so passing
    // Count as the max covers every index in the list (the original
    // Count - 1 would never select the last target).
    int rng = UnityEngine.Random.Range(0, BoardGoListLeftEye.Count);
    BoardGoRightEye = BoardGoListRightEye.ElementAt(rng);
    BoardGoLeftEye = BoardGoListLeftEye.ElementAt(rng);
}

Hi @doughtmw, thanks a lot for your answers!
I read the linked paper and found a more detailed explanation there. I should have checked it earlier :)

I found a YouTube video that also tried to record a better tracking experience on the HoloLens 2 by mounting a webcam behind the right-eye lens. You mentioned that the capture application creates a 2D image of the scene using the right-eye camera. So, did you perhaps also mount the webcam behind the right-eye lens?

One more thing: I'm wondering why the Right Eye is the main camera instead of the Left Eye in the Unity hierarchy. Usually the left camera is the reference frame in a stereo-view configuration, and HoloLens 2 Research Mode also places the Left Front camera sensor at the rig origin. Sorry if I'm a little confused here; please correct me if I'm wrong.

Again, thanks a lot for your help!

Awesome! Glad the paper was helpful. For the video I collected, the webcam was mounted behind the right-eye lens. Also, in the Unity scene, both cameras are tagged as the Main Camera, but each targets a separate eye to render to (either the right or the left eye). The naming of the right and left eye cameras as main and secondary in the Unity scene has no impact on the rendering of the scene.
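
For reference, the per-eye assignment comes down to each camera's Target Eye setting. Here's a minimal sketch of the equivalent in code; the field names are illustrative, since the actual scene configures this in the Editor rather than in a script.

using UnityEngine;

public class PerEyeCameraSetup : MonoBehaviour
{
    // Illustrative fields; in the sample scene these cameras are
    // assigned and configured through the Unity Editor instead.
    public Camera leftEyeCamera;
    public Camera rightEyeCamera;

    void Awake()
    {
        // Each camera renders to exactly one eye of the stereo display.
        // The name/tag of the camera GameObject has no effect on this.
        leftEyeCamera.stereoTargetEye = StereoTargetEyeMask.Left;
        rightEyeCamera.stereoTargetEye = StereoTargetEyeMask.Right;
    }
}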

You really saved me! Thanks so much for the clear explanation.