CS543_project_Image-based-Localization-of-Bridge-Defects-with-AR-Visualization

Visual inspection of bridges is customarily used to identify and evaluate faults. However, current procedures followed by human inspectors demand long inspection times to examine large and difficult-to-access bridges. To address these limitations, we investigate a computer vision-based approach that employs SIFT keypoint matching on collected images of defects against a pre-existing reconstructed 3D point cloud of the bridge. We also investigate methods of reducing computation time with ML-based and conventional CV methods of segmentation to eliminate redundant keypoints. Our project successfully localizes the defect images and achieves a reduction in runtime from filtering keypoints.


The purpose of this project is to identify the 3D coordinates of a small test image within a large reference 3D reconstructed point cloud.

Procedure

  1. VisualSFM and SiftGPU are used to run SfM with bundle adjustment and recover the sparse point cloud; the CMVS method is then used to generate the dense point cloud (only useful for 3D visualization). The sparse point cloud data is stored in the file bundle.rd.out, which I separate into two parts (a minimal parser sketch follows after step 2):

     a. LiuHong_Camera_index_R_T, the camera information. The file begins with <num_cameras> <num_points> [two integers]. Each camera entry then has the following format:

        [the focal length, followed by two radial distortion coefficients]
        [a 3x3 matrix representing the camera rotation]
        [a 3-vector describing the camera translation]

     b. LiuHong_Point_Collec ..., the point information. Each point entry has the form:

        [a 3-vector describing the 3D position of the point]
        [a 3-vector describing the RGB color of the point] (I think it is RGB, not BGR ^^)
        [a list of views the point is visible in]

        The view list begins with its length (i.e., the number of cameras the point is visible in) and is then given as a list of quadruplets <camera> <key> <x> <y>, where <camera> is the camera index, <key> is the index of the SIFT keypoint where the point was detected in that camera (not used in this project), and <x>, <y> are the detected positions of that keypoint. Both indices are 0-based. Important: the pixel positions are floating-point numbers in a coordinate system whose origin is the center of the image, with the x-axis increasing to the right and the y-axis increasing towards the top of the image. Thus (-w/2, -h/2) is the lower-left corner of the image and (w/2, h/2) is the top-right corner, where w and h are the width and height of the image.

     The dense point cloud is too big to upload; I named it point-3D.ply. It is basically a mesh file and can be opened with MeshLab. If you want to use it: https://drive.google.com/file/d/1e4iw6MDxaOPivZwCI0aAnaXsN2RTK5fc/view?usp=sharing Otherwise, you can run the CMVS dense reconstruction yourself using VisualSFM.

  2. The main file is draw_on_3d.py. It first loads the test image and runs OpenCV SIFT detection on all reference images. Visualize the camera locations and orientations: Camera. Visualize the sparse 3D reconstruction: Sparse_re. Visualize the dense 3D reconstruction: Dense_Re. All blue points mark camera positions.
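For reference, a minimal sketch of parsing the camera file described in step 1 (the whitespace-only token layout is an assumption based on the description above; this is not the project's actual loader):

```python
import numpy as np

def load_cameras(path="LiuHong_Camera_index_R_T"):
    """Parse the camera half of bundle.rd.out as described in step 1."""
    it = iter(open(path).read().split())
    num_cameras, num_points = int(next(it)), int(next(it))
    cameras = []
    for _ in range(num_cameras):
        f, k1, k2 = [float(next(it)) for _ in range(3)]  # focal length + radial distortion
        R = np.array([float(next(it)) for _ in range(9)]).reshape(3, 3)  # camera rotation
        t = np.array([float(next(it)) for _ in range(3)])  # camera translation
        cameras.append({"f": f, "k": (k1, k2), "R": R, "t": t})
    return num_points, cameras

def centered_to_pixel(x, y, w, h):
    # Convert the centered, y-up keypoint coordinates described above to
    # conventional (col, row) image coordinates with the origin at top-left.
    return x + w / 2.0, h / 2.0 - y
```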

The test image used is: test2

  3. To improve keypoint matching efficiency, it is a good idea to reduce the reference images as much as possible. A region of interest (ROI) is computed as the bounding box of the bridge using both CNN-based segmentation and conventional detection methods. The CNN-based approach used 15 images: 10 for training and 5 for testing. VGGNet was adapted by replacing the fully connected layers at the end with convolutional layers and adding a softmax layer to obtain probability maps for the background and the bridge (a sketch of this architecture follows below). The network was trained for 200 iterations with a batch size of 16 using SGD with a learning rate of 0.0001 and momentum 0.9.

ROI: ROI cropping

Line detection: Line_detection

CNN: CNN
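A minimal sketch of the adapted VGG architecture described in step 3 (the PyTorch framework, the vgg16 variant, and the intermediate channel width are assumptions; only the FC-to-convolution replacement, the softmax probability maps, and the SGD settings come from the text):

```python
import torch
import torch.nn as nn
from torchvision import models

class BridgeSegNet(nn.Module):
    """VGG backbone with the fully connected layers replaced by
    convolutions, producing background/bridge probability maps."""
    def __init__(self):
        super().__init__()
        self.features = models.vgg16(weights=None).features  # pretrained weights optional
        self.classifier = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 2, kernel_size=1),  # two classes: background, bridge
        )

    def forward(self, x):
        h, w = x.shape[2:]
        x = self.classifier(self.features(x))
        # Upsample the coarse scores to input resolution; softmax yields
        # the per-pixel probability maps used for the ROI bounding box.
        x = nn.functional.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
        return torch.softmax(x, dim=1)

model = BridgeSegNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001, momentum=0.9)  # as in the text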

  4. FLANN matching is implemented between the test image and each reference image with a distance-ratio filter of 0.7 (see the sketch below). Matching scores are calculated from the number of matching points, and only the top-5 scoring reference images are chosen for the following steps. Matching scores: matching_score
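A minimal sketch of this matching step with OpenCV's Python API (the image variables are placeholders):

```python
import cv2

sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5},  # KD-tree index
                              {"checks": 50})

kp_t, des_t = sift.detectAndCompute(test_img, None)  # test (defect) image
kp_r, des_r = sift.detectAndCompute(ref_img, None)   # one reference image

# Lowe-style ratio test with the 0.7 distance filter used in the project.
good = []
for pair in flann.knnMatch(des_t, des_r, k=2):
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])
score = len(good)  # matching score = number of surviving matches
```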

  5. Remove the 2D outliers from the top-5 reference images by calculating the mean and standard deviation of the 2D distances and keeping only the points whose distance is within 1 standard deviation (see the sketch below). Visualize the matching result: matching_result
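A sketch of the 1-std filter; interpreting the "2D distance" as each matched keypoint's distance from the centroid of the matches is an assumption:

```python
import numpy as np

def filter_outliers(points, n_std=1.0):
    """Keep points whose distance from the centroid is within n_std
    standard deviations of the mean distance. Works for (N, 2) arrays
    of 2D matches and, in step 9, for (N, 3) arrays of 3D points."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    keep = np.abs(d - d.mean()) <= n_std * d.std()
    return points[keep]
```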

  6. Run the region crop on the top-5 reference images: region

  7. Load the projection matrices for those 5 images.

  8. Run SIFT detection and FLANN matching among those 5 images to match the features.

  9. Use triangulation to 3D-reconstruct the matched points among the 5 images, then apply the outlier filter again to remove outliers from the 3D points (see the sketch below).
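A minimal two-view sketch of this triangulation step with OpenCV (P1, P2, pts1, pts2 are placeholders for the projection matrices from step 7 and matched 2xN pixel coordinates; the project matches across all 5 views, a single pair is shown for brevity):

```python
import cv2

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous, 4xN
pts3d = (pts4d[:3] / pts4d[3]).T                   # dehomogenize -> (N, 3)

# Re-apply the mean/std filter from step 5, now on the 3D points.
pts3d = filter_outliers(pts3d)
```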

  10. Compute the mean and standard deviation of the 3D points to decide the cube's location and width, and finally visualize them together with the dense reconstruction (see the sketch below): final. The red cube indicates the location of the test image.
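A sketch of how such a cube could be derived from the filtered points; centering at the mean and sizing by the standard deviation follows the text, but the exact scale factor is an assumption:

```python
import numpy as np

center = pts3d.mean(axis=0)             # cube location: mean of the 3D points
width = 2.0 * pts3d.std(axis=0).max()   # cube width: assumed 2x the largest std
cube_min, cube_max = center - width / 2, center + width / 2  # axis-aligned bounds
```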


