Ubuntu 18.04 & ROS Melodic Morenia
|
This repo contains an eye-in-hand calibration tool (C++ & ROS) used in the JD (京东) GRASPING ROBOT CHALLENGE (News),
and the implementation of my paper: Robotic hand-eye calibration with depth camera: A sphere model approach (PDF)
Inside `/src` there are 5 ROS packages:

- `rgbd_srv`: used by `camera_driver`.
- `camera_driver`: drives Intel® RealSense™ RGBD cameras in ROS (converts the raw streams to ROS `sensor_msgs::Image`).
- `camera_transform_publisher`: publishes the transformation matrix (extrinsics) between the camera and a marker (chessboard | aruco tag).
- `handeye_calib_marker`: hand-eye calibration tools that use an RGB camera and a marker, modified from handeye_calib_camodocal. With the packages above you can calibrate the eye-in-hand transformation on RGB images.
- `handeye_calib_sphere`: fine-tunes the eye-in-hand result based on depth.
- Ubuntu 18.04
- ROS Melodic Morenia (desktop-full install)
- OpenCV & opencv_contrib (both 3.4.0) installed in `/usr/local`
- librealsense, if you use Intel® RealSense™ RGBD cameras (D400 series and the SR300)
- visp_hand2eye_calibration_calibrator
  `$ sudo apt-get install ros-melodic-visp-hand2eye-calibration`
- glog (0.4.0, https://github.com/google/glog)
- Ceres-solver (1.14.0; download and installation instructions at http://ceres-solver.org)
- Sophus (1.0.0, https://github.com/strasdat/Sophus)
- ros-melodic-cv-bridge (see the cv_bridge note below)
  `$ sudo apt-get install ros-melodic-cv-bridge`
- Point Cloud Library (PCL)
  `$ sudo apt-get install libpcl-dev ros-melodic-pcl-ros`
Download opencv and opencv_contrib (both tested on 3.4.0).
Install the opencv prerequisites:
[compiler] $ sudo apt-get install build-essential
[required] $ sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
[optional] $ sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
Build opencv & opencv_contrib from source (we turned off the CUDA options):
# inside the opencv-3.4.0 source directory
$ mkdir build && cd build
$ cmake \
-D CMAKE_BUILD_TYPE=Release \
-D OPENCV_EXTRA_MODULES_PATH=<path/to/opencv_contrib-3.4.0>/modules/ \
-D BUILD_opencv_cudacodec=OFF \
-D WITH_CUDA=OFF \
-D WITH_CUBLAS=OFF \
-D WITH_CUFFT=OFF \
-D ENABLE_PRECOMPILED_HEADERS=OFF \
-D CMAKE_INSTALL_PREFIX=/usr/local ..
$ make -j5
$ sudo make install
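To sanity-check the installation (a small suggestion; this assumes the `opencv.pc` file generated by the 3.4.0 build landed under `/usr/local` and is visible to pkg-config), query the installed version:
$ pkg-config --modversion opencv
3.4.0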
We use an SR300/D415 camera and the `camera_driver` package to convert the raw RGBD images into ROS topics of `sensor_msgs::Image`. librealsense is required only if you use cameras from the RealSense™ family; otherwise, write your own `camera_driver`.
Follow the librealsense installation guide in the librealsense repository.
We installed: `librealsense2-dkms`, `librealsense2-utils`, `librealsense2-dev`, `librealsense2-dbg`.
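For reference, once the apt repository from that guide is configured, the four packages can be installed in one shot and the camera checked with the viewer that ships in `librealsense2-utils`:
$ sudo apt-get install librealsense2-dkms librealsense2-utils librealsense2-dev librealsense2-dbg
# plug in the camera, then verify the RGB and depth streams:
$ realsense-viewer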
If you are lucky, the pre-built `ros-melodic-cv-bridge` works out of the box.
However, since the default OpenCV in `ros-melodic-cv-bridge` is 3.2 while we use opencv & opencv_contrib 3.4.0, we recommend building cv_bridge from source.
$ git clone https://github.com/ros-perception/vision_opencv.git
$ cd vision_opencv
$ git checkout melodic
$ cd cv_bridge
$ mkdir build && cd build
$ cmake ..
$ make -j5
$ sudo make install
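If cmake silently picks up ROS's OpenCV 3.2 instead of your 3.4.0 build, you can point it at the 3.4.0 install explicitly (an example, assuming the default install prefix from the opencv build above; the exact OpenCV_DIR depends on your setup):
$ cmake -D OpenCV_DIR=/usr/local/share/OpenCV ..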
In the meantime, remove the pre-built `cv_bridge` from the ros-melodic path `/opt/ros/melodic`:
$ cd /opt/ros/melodic
# or archive them somewhere else instead of deleting
$ sudo rm -rf lib/libcv_bridge.so
$ sudo rm -rf include/cv_bridge
$ sudo rm -rf share/cv_bridge
For more information about `cv_bridge`, see the vision_opencv repository.
$ git clone https://github.com/lixiny/Handeye-Calibration-ROS.git
$ cd Handeye-Calibration-ROS
$ catkin_make
If you only want to calibrate the hand-eye transformation with an RGB marker:
$ git clone https://github.com/lixiny/Handeye-Calibration-ROS.git
$ cd Handeye-Calibration-ROS
$ git checkout rgb_only
$ catkin_make
Prepare a camera support that mounts the camera on the robot. We provide the support files [.STL] for UR5 and RealSense D415 & SR300.
Navigate to the `Handeye-Calibration-ROS/` folder, and source `devel/setup.bash` in each new terminal.
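That is:
$ cd Handeye-Calibration-ROS/
$ source devel/setup.bash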
$1 roslaunch camera_driver realsense_driver.launch
The camera intrinsics (later called `intr`) will be printed in the current terminal; record them.
The `camera_driver` publishes 4 topics to the rosmaster:

- `/realsense/rgb`
- `/realsense/depth`
- `/realsense/cloud`: the point cloud, under tf frame `/camera_link`
- `/realsense/camera_info`: follows ros `sensor_msgs/CameraInfo.msg`; we only use `K` and `D` from it
You can also visualize these topics in `rviz`.
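If you missed the intrinsics in the driver terminal, you can also read `K` and `D` directly from the topic:
$ rostopic echo -n 1 /realsense/camera_info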
You can either use a chessboard or the aruco plane we designed (doc/rawArucoPlane.jpg).
After you have placed the chessboard inside the camera view, run:
$2 roslaunch camera_transform_publisher chessboard_publisher_realsense.launch
# or specify chessboard parameters:
$2 roslaunch camera_transform_publisher chessboard_publisher_realsense.launch \
chessboardWidth:=10 chessboardHeight:=7 squareSize:=0.02
[IMPORTANT] There are 5 user-specified parameters in `chessboard_publisher_realsense.launch`:

- chessboardWidth: number of inner corners along the chessboard width
- chessboardHeight: number of inner corners along the chessboard height
- squareSize: side length of each chessboard square, in meters
- cameraTopic: the rgb topic name published by your camera_driver (ours: `/realsense/rgb`)
- cameraInfoTopic: the camera_info topic name published by your camera_driver (ours: `/realsense/camera_info`)

Make sure you have checked them against your chessboard.
After you have placed our aruco plane inside the camera view, run:
$2 roslaunch camera_transform_publisher aruco_publisher_realsense.launch
# or specify aruco plane parameters:
$2 roslaunch camera_transform_publisher aruco_publisher_realsense.launch \
tagSideLen:=0.035 planeSideLen:=0.25
[IMPORTANT] There are 4 user-specified parameters in `aruco_publisher_realsense.launch`:

- tagSideLen: side length of a single aruco tag, in meters (see illustration)
- planeSideLen: side length of the whole aruco plane, in meters (see illustration)
- cameraTopic: the rgb topic name published by your camera_driver (ours: `/realsense/rgb`)
- cameraInfoTopic: the camera_info topic name published by your camera_driver (ours: `/realsense/camera_info`)

Make sure you have checked them against the physical dimensions of your aruco plane.
Now in the pop-up window, you will see an AR cube like this:
The `camera_transform_publisher` will publish a tf (`/camera_link`, `/ar_marker`) to the ros master. If no marker is found, an identity SE(3) tf is published instead. You can also visualize the two frames in rviz.
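Besides rviz, you can also query the published tf directly from a new terminal:
$ rosrun tf tf_echo /camera_link /ar_marker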
Follow the instructions in "#3. Bringup UR" in doc/install_ur.md.
$3 roslaunch handeye_calib_marker handeye_online.launch
[IMPORTANT] There are 4 user-specified parameters in `handeye_online.launch`:

- ARTagTF: the name of the marker frame (defined in `camera_transform_publisher`: `/ar_marker`)
- cameraTF: the name of the camera frame (defined in `camera_transform_publisher`: `/camera_link`)
- EETF: the name of the end-effector frame (defined by UR: `/ee_link`)
- baseTF: the name of the robot base frame (defined by UR: `/base_link`)
Check these 4 tf names before you launch the calibration!
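One way to check them (assuming the camera publisher and the UR driver are both running) is to inspect the live tf tree:
# dump the whole tf tree to frames.pdf
$ rosrun tf view_frames
# or check a single pair, e.g. the robot side:
$ rosrun tf tf_echo /base_link /ee_link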
Repeatedly move the UR end effector to different configurations. Make sure that at each configuration a valid marker can be detected. In the current terminal, press `s` to record one AX = XB equation. After a sufficient number of equations have been recorded (30+), press `q` to perform the calibration. You will then see the calibration result (a translation x y z and a rotation qx qy qz qw) printed in the terminal.
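For reference, each press of `s` records one constraint of the classic hand-eye form. A sketch of where it comes from, assuming the marker stays fixed relative to the robot base, with X = ee_T_cam the unknown hand-eye transform:

```latex
% configurations i and j both observe the same, static marker:
{}^{base}T_{ee}(i)\, X \,{}^{cam}T_{marker}(i) \;=\; {}^{base}T_{ee}(j)\, X \,{}^{cam}T_{marker}(j)
% rearranging gives one AX = XB equation per pair (i, j):
A\,X = X\,B, \qquad
A = {}^{base}T_{ee}(j)^{-1}\,{}^{base}T_{ee}(i), \qquad
B = {}^{cam}T_{marker}(j)\,{}^{cam}T_{marker}(i)^{-1}
```

Here A comes from the robot's forward kinematics and B from two marker detections.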
Modify the file `handeye_calib_marker/launch/show_result.launch`, replacing `${Translation: x y z}` and `${Rotation: qx qy qz qw}` with your terminal output.
<launch>
    <node pkg="tf"
          type="static_transform_publisher"
          name="realsense_link"
          args="${Translation: x y z} ${Rotation: qx qy qz qw} /ee_link /camera_link 100" />
</launch>
and launch:
$4 roslaunch handeye_calib_marker show_result.launch
or, in a new terminal:
# rosrun tf static_transform_publisher \
# x y z qx qy qz qw \
# frame_id child_frame_id period_in_ms
$4 rosrun tf static_transform_publisher \
0.0263365 0.0389317 0.0792014 -0.495516 0.507599 -0.496076 0.500715 \
/ee_link /camera_link 100
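Either way, you can confirm the static transform is actually being broadcast:
$ rosrun tf tf_echo /ee_link /camera_link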
There are several equivalent representations/expressions of the HAND-EYE result:

- HAND-EYE is the transformation from `/ee_link` to `/camera_link`
- or tf: (`/ee_link`, `/camera_link`)
- or parent: `/ee_link`, child: `/camera_link`
- or: P_ee_link = T_handeye * P_camera_link
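As a minimal sketch of the last expression (assuming Eigen3 is available; the numbers are the example result from the step above, and the test point is hypothetical), this is how the hand-eye transform maps a point from the camera frame into the end-effector frame:

```cpp
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // Hand-eye rotation as a quaternion; Eigen's constructor order is (w, x, y, z).
    Eigen::Quaterniond q(0.500715, -0.495516, 0.507599, -0.496076);
    // Hand-eye translation (x y z), in meters.
    Eigen::Vector3d t(0.0263365, 0.0389317, 0.0792014);

    // Build T_handeye so that p_ee = R * p_camera + t.
    Eigen::Isometry3d T_handeye = Eigen::Isometry3d::Identity();
    T_handeye.rotate(q.normalized());
    T_handeye.pretranslate(t);

    // P_ee_link = T_handeye * P_camera_link
    Eigen::Vector3d p_camera(0.1, 0.0, 0.5);  // a hypothetical point seen by the camera
    Eigen::Vector3d p_ee = T_handeye * p_camera;
    std::cout << "point in /ee_link frame: " << p_ee.transpose() << std::endl;
    return 0;
}
```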
NOTE: you should now have a ready-to-use hand-eye transformation.
This result is optimized based only on RGB images. You can still fine-tune it with depth images (if available) following step #6.
// TODO
Yang, Lixin, et al. "Robotic hand-eye calibration with depth camera: A sphere model approach." 2018 4th International Conference on Control, Automation and Robotics (ICCAR). IEEE, 2018. PDF
@inproceedings{yang2018robotic,
title={Robotic hand-eye calibration with depth camera: A sphere model approach},
author={Yang, Lixin and Cao, Qixin and Lin, Minjie and Zhang, Haoruo and Ma, Zhuoming},
booktitle={2018 4th International Conference on Control, Automation and Robotics (ICCAR)},
pages={104--110},
year={2018},
organization={IEEE}
}
This repo is freely available for non-commercial use and may be redistributed under these conditions. Please see the license for further details.