Bebop_Stabilisation
This repository is the working space of two project groups (Drone1 and Drone2) in the context of Robotic and Embedded Project (PRJREB), INSA Lyon.
Each group has a working directory (Drone1/ and Drone2/); shared resources are to be found in the main directory.
Installation Bebop
SIFT
SURF
Contour detection
General bebop
Topic bebop
Build OpenCV (OPENCV_ENABLE_NONFREE is required for SIFT/SURF):
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D OPENCV_ENABLE_NONFREE=ON
ROS commands
Running the driver as a Node
$ roslaunch bebop_driver bebop_node.launch
Takeoff
$ rostopic pub --once [namespace]/takeoff std_msgs/Empty
Land
$ rostopic pub --once [namespace]/land std_msgs/Empty
Emergency
$ rostopic pub --once [namespace]/reset std_msgs/Empty
Pilot
| Field | (+) | (-) |
|---|---|---|
| linear.x | Translate forward | Translate backward |
| linear.y | Translate to left | Translate to right |
| linear.z | Ascend | Descend |
| angular.z | Rotate counter-clockwise | Rotate clockwise |
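These fields belong to geometry_msgs/Twist messages published on the driver's cmd_vel topic. A minimal sketch (the 0.1 value is illustrative; [namespace] is your driver namespace as in the commands above):

```shell
# Translate slowly forward; all other velocity components stay zero.
rostopic pub --once [namespace]/cmd_vel geometry_msgs/Twist \
  '{linear: {x: 0.1, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}'
```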
Video processing
(from https://www.learnopencv.com/video-stabilization-using-point-feature-matching-in-opencv/)
Import video, read frames
The video stream is published on the image_raw topic as sensor_msgs/Image messages (640 x 368 @ 30 Hz) (doc).
We receive one image after another, so we'll need to store at least one OpenCV image in a 'buffer'.
We also need to convert this ROS-format image stream to OpenCV format; we'll use the ROS package vision_opencv.
Using vision_opencv
Tutorial on converting ROS images to OpenCV format (Python).
from cv_bridge import CvBridge
bridge = CvBridge()
cv_image = bridge.imgmsg_to_cv2(image_message, desired_encoding='passthrough')
We will need to pay attention to the desired encoding; the most common encoding for OpenCV is bgr8.
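Putting the pieces together, a minimal subscriber sketch with a one-frame buffer (the topic name /bebop/image_raw and the class structure are assumptions; adapt them to your namespace):

```python
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

class FrameBuffer:
    """Keeps the previous frame so consecutive frames can be compared."""

    def __init__(self):
        self.bridge = CvBridge()
        self.prev = None  # the one-image 'buffer'
        # /bebop/image_raw is an assumption; use your driver's namespace.
        rospy.Subscriber('/bebop/image_raw', Image, self.callback)

    def callback(self, msg):
        # bgr8 is the most common encoding for OpenCV.
        curr = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        if self.prev is not None:
            pass  # process the (self.prev, curr) pair here
        self.prev = curr

if __name__ == '__main__':
    rospy.init_node('frame_buffer')
    FrameBuffer()
    rospy.spin()
```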
Convert to greyscale
Because we don't really need colours for feature tracking:
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
Detect feature points
This method retrieves interesting points to track in the image, i.e. corners. You can change the number of points wanted, the minimum spacing between them, etc.
- cv2.goodFeaturesToTrack() (doc)
Calculate optical flow
This method tries to track each given feature point in the next frame; you have to clean the results, because some points fail to track (check the returned status array).
- cv2.calcOpticalFlowPyrLK() (doc)
Estimate Motion
This method uses the points of interest from the first frame and the corresponding points of interest from the second frame to compute the transformation matrix.
ATTENTION: this method has been deprecated since OpenCV 4; see cv2.estimateAffine2D() and cv2.estimateAffinePartial2D() instead.
- cv2.estimateRigidTransform() (doc)