The goal of this project is to track buoys in a video.
- ROS2 Humble
- numpy
- imutils
- opencv-python
- cv_bridge
Copy the contents of the resource folder here into the resource folder inside the buoy_tracker package.
Open a new terminal with ROS sourced.
- mkdir -p tracker_ws/src
- cd tracker_ws/src/
- git clone https://github.com/saching13/buoy-tracker.git
- cd ../..
- rosdep install --from-paths src -y --ignore-src
- colcon build --symlink-install
Open a new terminal with ROS sourced.
- cd tracker_ws
- source install/setup.bash
- ros2 launch buoy_tracker buoy_tracker.launch.py
The buoy tracker node detects buoys in the video and tracks them between detections. With the default configuration, it runs detection only once every few frames (see detect_freq below) and relies on tracking for the remaining frames.
Here are some of the configs:
video_file_path
-> Path to the video.
weights_file_path
-> Path to the weights used by the GMM for detection of the buoys.
detect_freq
-> Determines how often detection runs: once every detect_freq frames, with tracking used for the rest. Default is 4.
max_decay_count
-> Number of predictions without a detection after which the tracker discards the tracked object. This should be at least twice detect_freq.
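As a rough sketch of how detect_freq and max_decay_count might interact, here is an illustrative scheduling and track-decay loop. The function and class names below are hypothetical, not the package's actual API; only the two parameter names come from the config above.

```python
# Illustrative sketch of the detect-every-Nth-frame schedule and track decay.
# `detect_freq` and `max_decay_count` mirror the config parameters above;
# everything else (names, structure) is hypothetical.

def schedule(num_frames, detect_freq=4):
    """Return, per frame, whether detection or tracking runs."""
    return ["detect" if i % detect_freq == 0 else "track"
            for i in range(num_frames)]

class TrackDecay:
    """Discard a track after too many predictions without a detection."""
    def __init__(self, max_decay_count=8):
        self.max_decay_count = max_decay_count
        self.misses = 0
        self.alive = True

    def on_detection(self):
        self.misses = 0  # a detection confirms the track

    def on_prediction_only(self):
        self.misses += 1
        if self.misses >= self.max_decay_count:
            self.alive = False  # drop the stale track
```

Setting max_decay_count to at least twice detect_freq ensures a track survives at least two full detection cycles before being dropped.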
- There will only be 3 colored buoys.
- Color information is not used during tracking, to make the problem a little harder and better showcase the tracker and the association algorithm.
- The detection algorithm is a restructured version of the algorithm in the reference mentioned below; it is not optimized for speed at the moment, but there is room for improvement.
For detection I am using the pre-trained Gaussian mixture model from this repository.
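A minimal sketch of GMM-based color detection, assuming scikit-learn's GaussianMixture. The actual node loads pre-trained weights from weights_file_path; here a toy model is fit inline, and the sample colors and threshold are made up for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy example: fit a GMM to "buoy-colored" pixels, then score an image's
# pixels and keep those whose log-likelihood clears a threshold.
# The real node loads pre-trained GMM weights instead of fitting here.

rng = np.random.default_rng(0)
# Fake BGR samples clustered around an arbitrary buoy-like color.
buoy_pixels = rng.normal(loc=[30, 200, 220], scale=10, size=(500, 3))

gmm = GaussianMixture(n_components=2, random_state=0).fit(buoy_pixels)

def detect_mask(image_bgr, threshold=-20.0):
    """Return a boolean mask of pixels the GMM considers buoy-like."""
    h, w, _ = image_bgr.shape
    scores = gmm.score_samples(image_bgr.reshape(-1, 3).astype(float))
    return (scores > threshold).reshape(h, w)
```

Connected components of the resulting mask would then give candidate buoy centers and radii for the tracker.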
Some improvements that I would like to add but left out for now due to being short on time:
- Include the radius in the tracker.
- Improve the tracking algorithm
- Improve the lazy loops in the detection algorithm.
- Use the segmented images for training
- Use a NN based segmentation algorithm instead of gauss mixture.
- Detection based on simple thresholding in RGB/BGR, HSV, or LAB color spaces failed.
- In the implementation I am calling predict right after update, so that I can showcase where the tracker expects the next position to be in the current image.
- So in the image, the green circle shows where the algorithm expects the next position to be, while the blue circle shows the position updated at the last detection, which is 2 or 4 frames behind depending on the settings.
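The update-then-predict ordering above can be sketched with a minimal 1D constant-velocity Kalman filter in plain NumPy. The state model and noise values here are assumptions for illustration; the actual tracker's state and tuning may differ.

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter illustrating the
# update-then-predict ordering: after each measurement update
# (blue circle: last detected position) we immediately predict
# (green circle: expected next position on the current frame).

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition (pos, vel), dt = 1
H = np.array([[1.0, 0.0]])              # we only measure position
Q = np.eye(2) * 1e-3                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def update_then_predict(x, P, z):
    # --- update with measurement z ---
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_upd = x + K @ y                   # blue: corrected position
    P_upd = (np.eye(2) - K @ H) @ P
    # --- predict immediately, for display on the current frame ---
    x_pred = F @ x_upd                  # green: expected next position
    P_pred = F @ P_upd @ F.T + Q
    return x_upd, x_pred, P_pred

x = np.array([[0.0], [0.0]])            # initial state: pos 0, vel 0
P = np.eye(2)
x_upd, x_pred, P = update_then_predict(x, P, np.array([[1.0]]))
```

Feeding the predicted state back in as the prior for the next frame closes the loop; the green circle leads the blue one once a velocity estimate has formed.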