Skythianos / Manipulating-objects-using-keypoints

Finding objects in an arbitrary environment is one of the unsolved problems for robots operating in such environments, e.g. households. This project presents a robotics application: the software controls a robotic arm and estimates the spatial position and orientation of an object on which it has been trained previously. The estimation uses images retrieved from a camera mounted on the robot's end effector. The software uses a PnP algorithm, which estimates the spatial pose from object points with known 3D coordinates and the corresponding image points; the image points are found with the SURF keypoint detector. During training, 3D reconstruction is done via multi-view triangulation using multiple images taken from known positions.




Languages

Python: 99.4%
xBase: 0.6%