dkogan / mrcal

Next-generation camera-modeling toolkit

Home Page: http://mrcal.secretsauce.net

Have you tried using the star-shaped feature-point detector?

VasilevIvanVladimirovich opened this issue · comments

I really like your work and your contribution to camera calibration.

I would like to ask about images with incomplete detections. In the documentation you refer to the paper by Thomas Schöps et al., "Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve". Have you tried using their detector to find the corners?

Hi. I have not tried it, since it would probably be a lot of work to run that experiment. Is the software available? Can it be built and used without endless pain? If you try it out, let me know what you find!

I'm aware of one chessboard detector that's probably better than mrgingham: boofcv. I performed only very limited testing, so maybe I'm wrong. It's a Java thing, so I didn't end up using it.

Hi, @dkogan
Thanks for pointing out the boofcv detector; I had never heard of it before, and thanks to you I learned something new.

I used the star detector from here and added some code to convert the corners to .vnl format.

During the experiment, I took 100 shots of the star template, performed the detection with the Star Detector, and converted the result to .vnl format.
Show corners

For calibration I used the command:
mrcal-calibrate-cameras --corners-cache MainCorner.vnl --lensmodel LENSMODEL_SPLINED_STEREOGRAPHIC_order=3_Nx=30_Ny=20_fov_x_deg=150 --focal 1250 --object-spacing 0.012 --object-width-n 17 --object-height-n 24 './frames/*.png'

The program gave the following error:
[terminal screenshot showing the error]

I assumed that the problem might lie in the small number of corners found in the images, so I selected the 5 most successful shots for verification and repeated the calibration.
The error is gone! :)
Show Model
An example of an image undistorted using the resulting model:

[undistorted image]

My question is: can the program give that error because of the large number of undetected corners?

Hi. Thanks for doing the work to test this! I'm currently improving the documentation, and this data will help me do that. Thanks!

The error you're seeing (non-positive-definite JtJ matrix in CHOLMOD) usually results from some variable in the optimization having zero effect on the problem. When the solver sees that, it tries to correct by adding a bit of L2 regularization to pull everything towards 0, but it's not a "fix", and the root cause should be addressed. The tool needs to be clearer about what's happening, and I can sorta improve that.

Here's what's happening in your case:

  • The optimization runs to completion
  • The solver looks for outliers (simply defined as poorly-fitting observations for now), and throws them out. In your case, for some observation this threw away all or almost all of the observed points.
  • The optimization runs again, ignoring the outliers
  • Due to the outlier rejection, there's a chessboard observation in the solve with no points (all have been thrown out). So the variables that control where this chessboard is in space no longer have any effect on the solution, and you get your problem
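
To make that concrete, here's a toy illustration in plain numpy (not mrcal's actual solver code; the numbers are made up) of why a zero-effect variable breaks the Cholesky factorization, and why a bit of L2 regularization papers over it without fixing anything:

import numpy as np

# Toy problem: 50 measurements, 6 state variables. Pretend the last 3
# variables (a chessboard pose, say) no longer affect any surviving
# measurement: their Jacobian columns are all zero
rng = np.random.default_rng(0)
J = rng.normal(size=(50, 6))
J[:, 3:] = 0.0

JtJ = J.T @ J
try:
    np.linalg.cholesky(JtJ)          # roughly what CHOLMOD is asked to do
except np.linalg.LinAlgError:
    print("JtJ is not positive definite: some variables have zero effect")

# A small L2 term (lambda * I) makes the factorization succeed, but those
# variables are now determined only by the regularization, not by the data;
# the underlying modeling problem is still there
lam = 1e-6
np.linalg.cholesky(JtJ + lam*np.eye(6))
print("JtJ + lambda*I factorizes fine")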

A simple workaround is to disable the outlier rejection: mrcal-calibrate-cameras --skip-outlier-rejection ....

If I do that, the errors go away, but the resulting residuals are very poor: mrcal-show-residuals-board-observation --from-worst camera-0.cameramodel 0-5 pulls up the 6 worst-fitting images, and we see that the worst one is REALLY bad, but the rest are fine. This tells me that it's likely that this one image has some kind of problem. The bad image is frame_252.png. If we get rid of that image and re-solve, the errors disappear. So maybe the detections are out-of-order in that image (rotated or mirrored). Or maybe the seeding algorithm got confused about something. Can you please double-check the detections for that image? If you tell me they look OK, I'll look a bit deeper.

And two more notes:

  • The FOV in your splined model is far too high. It doesn't break anything, but it puts a lot of the splined surface beyond where your lens can see, which wastes that complexity. Try something like LENSMODEL_SPLINED_STEREOGRAPHIC_order=3_Nx=12_Ny=9_fov_x_deg=75
  • The MainCorner.vnl you sent isn't a .vnl: it's missing the header. Prepend # filename x y level to that data, and it becomes a valid .vnl (a small Python sketch of writing one appears after the pipelines below), and you can do stuff like
< MainCorner.vnl \
  vnl-filter -p filename --has x \
| vnl-uniq -c \
| vnl-filter -p count \
| feedgnuplot --histo 0 --binwidth 10

and

< MainCorner.vnl \
  vnl-filter -p filename --has x \
| vnl-uniq -c \
| vnl-sort -gk count
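
And if it's ever useful: writing that header (and the rest of the .vnl) straight from a corner detector is trivial. A minimal Python sketch, where the detections list and the filenames are made-up placeholders to be filled from your detector's output:

#!/usr/bin/env python3
# Minimal corners-to-.vnl writer. "detections" is a placeholder: fill it from
# the detector output. "level" is the decimation level of each detection;
# 0 means the corner was found at full resolution
detections = [
    ("frames/frame_000.png", 123.4, 567.8, 0),
    ("frames/frame_000.png", 130.1, 570.2, 0),
]

with open("MainCorner.vnl", "w") as f:
    f.write("# filename x y level\n")
    for filename, x, y, level in detections:
        f.write(f"{filename} {x:.3f} {y:.3f} {level}\n")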

I looked deeper into this, and found a bug in my code. This caused issues in the initialization of incomplete chessboard views, which triggered the problem in this report. Fixed here: be69d9e. I'm about to commit another patch to improve diagnostic printing in such cases as well. I'm closing the report.

Hello @dkogan
Thank you for answering my question; you helped me a lot.
I am glad that I was able to help this project.

The detection on image frame_252.png looks OK.

Hi. Yes, the detection on frame 252 was OK. The problem was caused by the bug that is now fixed in git.

What's your experience with the detector? Did it work OK? Can you share the complete set of code you used to run it over your images?

Would you mind if I used your MainCorner.vnl data in the documentation for mrcal? It would be a good example of using other corner detectors with mrcal.

Hi, @dkogan
Of course, I have no objections; feel free to use the data.

At the moment I can't check how good this template is. I would like to, but I have other tasks right now. I have a robot manipulator that can help a lot with collecting a set of images, and I will definitely get to that stage soon.

My template is printed on plastic and is not perfectly smooth.
PDF: pattern_resolution_17x24_segments_16_apriltag_0 (1).pdf
Pattern Model: pattern.txt
(I couldn't upload a .yaml file here, so I changed the extension to .txt; if you want to use this model inside the program, change it back to .yaml.)

I can't offer a simple way to work with this template.
To get the corners in .vnl format I used additional code and a separate converter; if you need it, I can describe my algorithm.

To run the detector, I used a program from an open repository. To use it, you need to install all the dependencies suggested in the documentation.
To start the detector, I used the following command: ./camera_calibration --pattern_files PATH/pattern.yaml --image_directories PATH/imgDir --dataset_output_path PATH/features.bin --refinement_window_half_extent 20 --no_cuda_feature_detection
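
For reference, here is the same invocation as a small Python wrapper (just subprocess around the command above; the PATH/... strings are placeholders, and the camera_calibration binary must already be built per that repository's documentation):

#!/usr/bin/env python3
# Thin wrapper around the detector invocation above; all PATH/... values are
# placeholders
import subprocess

subprocess.run(
    ["./camera_calibration",
     "--pattern_files",                 "PATH/pattern.yaml",
     "--image_directories",             "PATH/imgDir",
     "--dataset_output_path",           "PATH/features.bin",
     "--refinement_window_half_extent", "20",
     "--no_cuda_feature_detection"],
    check=True)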

In the near future, I will rewrite the detector for my project and make it independent of the main program presented in the author's repository. When I manage to implement this, I would be happy to share it with you.