thomasfermi / Algorithms-for-Automated-Driving

Each chapter of this (mini-)book guides you in programming one important software component for automated driving.

Home Page: https://thomasfermi.github.io/Algorithms-for-Automated-Driving/Introduction/intro.html

Improve camera_calibrator.py

thomasfermi opened this issue

The CameraCalibrator shall get docstrings and # TODO comments to help the student with the implementation.

We should add functions

  • show_vanishing_point(self, image, mpl_axis), which determines the vanishing point from the image and then writes a plot to the mpl_axis object. This way we can reduce the boilerplate code in the book chapter itself.
  • The current function get_vanishing_point shall be renamed to get_intersection. It should include a check for m1 == m2 to avoid division by zero. There shall be a new function get_vanishing_point(self, image) that returns u_i, v_i.
  • The function get_py_from_vp(self, u_i, v_i, K) can live without the "K" argument. It can just use K = self.ld.cg.intrinsic_matrix or even directly Kinv = self.ld.cg.inverse_intrinsic_matrix.
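The first two helpers could look like the sketch below. The (slope, intercept) line parameterization, the np.isclose guard, and the sign conventions for pitch and yaw are assumptions on my part, not the final solution code:

```python
import numpy as np

def get_intersection(line1, line2):
    """Intersect two image lines given as (slope, intercept) pairs.

    Returns (u, v) or None if the lines are (nearly) parallel.
    """
    m1, c1 = line1
    m2, c2 = line2
    if np.isclose(m1, m2):
        return None  # parallel lines: no finite intersection
    u = (c2 - c1) / (m1 - m2)
    v = m1 * u + c1
    return u, v

def get_py_from_vp(u_i, v_i, K):
    """Estimate camera pitch and yaw from the vanishing point (u_i, v_i).

    The vanishing point of the lane lines is the image of the road's
    forward direction; back-projecting it with K^-1 gives that direction
    in the camera frame. Sign conventions here are assumed (image y axis
    pointing down) and may need to be flipped.
    """
    r = np.linalg.inv(K) @ np.array([u_i, v_i, 1.0])
    r /= np.linalg.norm(r)
    pitch = -np.arcsin(r[1])
    yaw = -np.arctan2(r[0], r[2])
    return pitch, yaw
```

With this shape, get_vanishing_point(self, image) can fit a line to each lane boundary and pass the two (slope, intercept) pairs to get_intersection, handling the None case explicitly.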

EDIT: Regarding the last point. Maybe giving K as an argument is not such a bad idea after all... Now that the CameraCalibrator is added, I feel that the design of the relations between CameraGeometry, LaneDetector, and CameraCalibrator is not that nice. Maybe the LaneDetector should not have a reference to the CameraGeometry, but rather have it passed as a function argument when needed. I will think about this a bit...

Ok, I thought about this for a while and I came up with another design.

The CameraGeometry and LaneDetector classes from the previous exercises stay as they are. In this chapter we implement a class CalibratedLaneDetector which inherits from LaneDetector. I will develop a suggestion for that. Then it can be discussed.
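A minimal skeleton of that inheritance idea could look as follows. The stub LaneDetector stands in for the class from the previous chapters, and attribute names like estimated_pitch are hypothetical placeholders:

```python
class LaneDetector:
    """Stub standing in for the LaneDetector from the previous chapters."""
    def __init__(self, model_path=None):
        self.model_path = model_path

    def detect(self, image):
        raise NotImplementedError

class CalibratedLaneDetector(LaneDetector):
    """LaneDetector that additionally estimates camera pitch and yaw online."""
    def __init__(self, model_path=None):
        super().__init__(model_path)
        self.estimated_pitch = None
        self.estimated_yaw = None
        self.calibration_success = False

    def update_calibration(self, pitch, yaw):
        # In the real class this would be called per frame, e.g. with
        # values derived from the vanishing point of the lane lines.
        self.estimated_pitch = pitch
        self.estimated_yaw = yaw
        self.calibration_success = True
```

The point of the inheritance is that the chapter's detector keeps the full LaneDetector interface while adding calibration state on top.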

I came up with this initial design: CalibratedLaneDetector

Maybe this can replace the CameraCalibrator. This is just the solution code and we would need to think about what will be implemented by the students.

There is also a minimal test notebook, but quite some work is still needed there. The test notebook should probably loop over the images in a video and feed them to the CalibratedLaneDetector. This should then determine pitch and yaw and average them over time.
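Averaging over time could be as simple as collecting the per-frame estimates and reporting their mean; this small helper is only a sketch of that idea (the class name is made up):

```python
import numpy as np

class CalibrationAverager:
    """Accumulate per-frame (pitch, yaw) estimates and report running means."""
    def __init__(self):
        self.pitches = []
        self.yaws = []

    def update(self, pitch, yaw):
        # Called once per frame in which calibration succeeded
        self.pitches.append(pitch)
        self.yaws.append(yaw)

    @property
    def mean_pitch(self):
        return float(np.mean(self.pitches))

    @property
    def mean_yaw(self):
        return float(np.mean(self.yaws))
```

A more refined version could use an exponential moving average so that old, possibly bad estimates decay, but a plain mean is enough for a first test notebook.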

@MankaranSingh What do you think of this approach?

Update:

  • Added code to CalibratedLaneDetector such that calibration is only performed when straight lines are a good fit. This way the calibration will not run for the part of the video where we drive through a curve.
  • Added code in the test notebook that iterates over the video. If this imageio-based code shall remain, pip install imageio-ffmpeg needs to be added to environment.yml.
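
One way to check whether straight lines are a good fit is to look at the residual of a degree-1 polynomial fit to the lane-boundary points; this sketch illustrates the idea (the threshold value is an arbitrary assumption and would need tuning):

```python
import numpy as np

def is_straight(x, y, mse_threshold=0.1):
    """Return True if a straight line fits the points (x, y) well.

    Uses the residual reported by np.polyfit(..., full=True); the
    threshold is an assumed value, not the one from the solution code.
    """
    _, residuals, *_ = np.polyfit(x, y, deg=1, full=True)
    # residuals holds the sum of squared residuals (empty if rank-deficient)
    mse = residuals[0] / len(x) if residuals.size else 0.0
    return mse < mse_threshold
```

Gating the calibration on such a check means curved road segments simply contribute no pitch/yaw samples, rather than contributing wrong ones.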

The residual method looks good! We would need to add this method to the hints section; currently, it only contains the curvature-based method. What do you think: should we remove the curvature method from the chapter, or also add its code to the solution code?

Also, for video reading, why not use cv2.VideoCapture? It would save us from adding a new dependency.

Hi @MankaranSingh ,

What do you think: should we remove the curvature method from the chapter, or also add its code to the solution code?

I think I would remove it for now.

Also, for video reading, why not use cv2.VideoCapture? It would save us from adding a new dependency.

You are right. I was not using OpenCV because it was a bit cumbersome, but I finally got it running now. I changed the test notebook.

Remaining work to close the issue:

  • Remove code/solutions/camera_calibration/camera_calibrator.py, since calibrated_lane_detector.py contains everything that is needed
  • Create code/exercises/camera_calibration/calibrated_lane_detector.py
  • Change the book chapter so that it uses code/solutions/camera_calibration/camera_calibrator.py