Thesis-Gists

A Vision-Based Odometry Model for Adaptive Human-Robot Systems

Advances in automation, particularly in industrial robotics, have brought significant production efficiencies to mid- and large-scale manufacturers across the globe. Speed, advanced sensing, and tighter integration with manufacturing execution systems have brought many benefits to modern production systems. These advances aside, industrial robots still lack many of the safety attributes needed to operate unguarded and under conditions where true human-robot interaction can exist. Compounding the issue, programming methods and the overhead of testing and validation run counter to real-time, rapid-response ideologies. The dawn of easy-to-program, safe, and cost-efficient automation now drives the development of the next generation of industrial robotics: systems that use smart sensing technologies, including force and torque monitoring, to support “safe”, incidental human-operator contact. Because of advances on this front, it is now possible for human and machine to work in concert with one another. Human-robot collaboration is “human-in-the-loop” automation: automation that simplifies tasks that would be too cumbersome and expensive to implement with rigid, engineered solutions.

In this thesis a step towards “human-in-the-loop” automation is taken: an object is placed within the robot’s workspace and the robot interacts with it in real time without any pre-training. The collaborative robot is guided by a low-cost 3D vision solution of a kind rarely seen in industry, and this vision guidance simplifies manufacturing assembly tasks using commercial hardware. Real-time streaming point clouds are taken as input and compared with mesh CAD data using a correspondence-grouping algorithm, demonstrating a fast and robust object-recognition solution. The object’s position is then computed from 3D geometry, and from that position an optimized trajectory for the robot arm to reach the object is planned. The thesis discusses a real-time, vision-guided human-robot collaborative system and analyzes path-planning optimization. The case study demonstrates the use of the Robot Operating System (ROS) and the implementation of the entire recognition-manipulation phase in real time, and it introduces the adaptive nature of the system for both safe product handling and safe human-robot collaboration.
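The recognition step above can be illustrated with the Point Cloud Library's correspondence-grouping pipeline. The sketch below is not the thesis code; it is a minimal example that assumes PCL is used, with placeholder file names (`model.pcd` sampled from the CAD mesh, `scene.pcd` from the camera stream) and untuned radii and thresholds. It shows SHOT descriptors computed on model and scene keypoints, matched in descriptor space, and clustered by geometric consistency into per-instance 6-DoF pose hypotheses.

```cpp
#include <cmath>
#include <vector>
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/recognition/cg/geometric_consistency.h>

using PointT  = pcl::PointXYZRGBA;
using NormalT = pcl::Normal;
using DescT   = pcl::SHOT352;

int main()
{
  // Model cloud sampled from the CAD mesh, scene cloud streamed from the 3D camera.
  // File names are placeholders for illustration only.
  pcl::PointCloud<PointT>::Ptr model(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr scene(new pcl::PointCloud<PointT>);
  pcl::io::loadPCDFile("model.pcd", *model);
  pcl::io::loadPCDFile("scene.pcd", *scene);

  // Surface normals for both clouds.
  pcl::NormalEstimationOMP<PointT, NormalT> ne;
  ne.setKSearch(10);
  pcl::PointCloud<NormalT>::Ptr model_normals(new pcl::PointCloud<NormalT>);
  pcl::PointCloud<NormalT>::Ptr scene_normals(new pcl::PointCloud<NormalT>);
  ne.setInputCloud(model); ne.compute(*model_normals);
  ne.setInputCloud(scene); ne.compute(*scene_normals);

  // Uniformly sampled keypoints keep the matching step fast enough for streaming input.
  pcl::UniformSampling<PointT> us;
  us.setRadiusSearch(0.01);
  pcl::PointCloud<PointT>::Ptr model_keypoints(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr scene_keypoints(new pcl::PointCloud<PointT>);
  us.setInputCloud(model); us.filter(*model_keypoints);
  us.setInputCloud(scene); us.filter(*scene_keypoints);

  // SHOT descriptors at the keypoints, using the full clouds as search surfaces.
  pcl::SHOTEstimationOMP<PointT, NormalT, DescT> shot;
  shot.setRadiusSearch(0.02);
  pcl::PointCloud<DescT>::Ptr model_descr(new pcl::PointCloud<DescT>);
  pcl::PointCloud<DescT>::Ptr scene_descr(new pcl::PointCloud<DescT>);
  shot.setInputCloud(model_keypoints); shot.setInputNormals(model_normals);
  shot.setSearchSurface(model);        shot.compute(*model_descr);
  shot.setInputCloud(scene_keypoints); shot.setInputNormals(scene_normals);
  shot.setSearchSurface(scene);        shot.compute(*scene_descr);

  // Nearest-neighbour matching in descriptor space gives model-scene correspondences.
  pcl::KdTreeFLANN<DescT> match_tree;
  match_tree.setInputCloud(model_descr);
  pcl::CorrespondencesPtr correspondences(new pcl::Correspondences);
  for (std::size_t i = 0; i < scene_descr->size(); ++i)
  {
    std::vector<int> idx(1);
    std::vector<float> dist(1);
    if (!std::isfinite(scene_descr->at(i).descriptor[0])) continue;  // skip NaN descriptors
    if (match_tree.nearestKSearch(scene_descr->at(i), 1, idx, dist) == 1 && dist[0] < 0.25f)
      correspondences->push_back(pcl::Correspondence(idx[0], static_cast<int>(i), dist[0]));
  }

  // Geometric-consistency grouping clusters the correspondences and returns one
  // 6-DoF pose hypothesis (rotation + translation) per recognized object instance.
  pcl::GeometricConsistencyGrouping<PointT, PointT> gc;
  gc.setGCSize(0.01);
  gc.setGCThreshold(5);
  gc.setInputCloud(model_keypoints);
  gc.setSceneCloud(scene_keypoints);
  gc.setModelSceneCorrespondences(correspondences);
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
  std::vector<pcl::Correspondences> clustered;
  gc.recognize(poses, clustered);
  return 0;
}
```

For the manipulation step, trajectory planning toward the recognized object is commonly done in ROS through MoveIt. The following is a minimal sketch under that assumption; the planning-group name "manipulator" and the target pose values are placeholders, and in practice the pose would be derived from the transformation returned by the recognition step.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <moveit/move_group_interface/move_group_interface.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "reach_recognized_object");
  ros::AsyncSpinner spinner(1);
  spinner.start();

  // "manipulator" is a placeholder; the real group name comes from the robot's
  // MoveIt configuration package.
  moveit::planning_interface::MoveGroupInterface group("manipulator");

  // Pose of the recognized object, e.g. converted from the pose hypothesis
  // produced by the correspondence-grouping step. Values here are placeholders.
  geometry_msgs::Pose target;
  target.position.x = 0.4;
  target.position.y = 0.1;
  target.position.z = 0.3;
  target.orientation.w = 1.0;

  // Plan a collision-free trajectory to the target and execute it if planning succeeds.
  group.setPoseTarget(target);
  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
    group.execute(plan);

  ros::shutdown();
  return 0;
}
```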

Link: http://www.lib.ncsu.edu/resolver/1840.16/11399

Video Demo: https://youtu.be/HMF_wyXZJ8M

Motivation and Early work: https://youtu.be/f0LnZkrKQsY

Languages

Language: C++ 100.0%