A brief overview of our approach can be found in this video - VIDEO SUBMISSION
The problem of pick and place has long been an actively studied area and a canonical problem in robotics. The Amazon Robotics Challenge (ARC) has a rich tradition of producing highly robust and competitive warehouse robots that classify and segregate objects in addition to picking and placing them. The advent of Deep Reinforcement Learning as a reliable alternative for learning robot controllers has greatly increased the dexterity and robustness of these arms. The problem statement of Flipkart Grid 2.0 is unique, unparalleled, and challenging, and demands a great deal of customization and design improvement in both hardware and software. The large dimensions of the arena and the relatively heavy payload rule out the direct use of pre-existing methodologies, and fabricating a robot from scratch at the given price point makes the challenge all the more exciting. We therefore share a solution for the above task, along with all our experiments and results, which to the best of our knowledge is the most cost-efficient, simple, yet robust approach.
- Our robot is greatly inspired by Cartman, owing to its cost-efficient Cartesian design, which can cover the entire work area in a stable fashion.
- A generic 6-DOF robot arm requires high-torque motors at every joint to support the payload at the end effector, and such motors cost around INR 10,000 per unit. In our Cartesian design, however, the torque required per joint is drastically reduced, so we are unaffected by this limitation (see the back-of-the-envelope sketch after this list).
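The torque argument above can be made concrete with a rough back-of-the-envelope comparison. The sketch below is illustrative only: the payload, reach, link mass, carriage mass, pulley radius, and acceleration values are assumptions, not our final hardware parameters.

```python
# Rough static-torque comparison: articulated arm joint vs. Cartesian gantry axis.
# All numbers are illustrative assumptions, not measured hardware values.

G = 9.81  # gravitational acceleration, m/s^2

def arm_shoulder_torque(payload_kg, reach_m, link_mass_kg):
    """Worst-case holding torque at the shoulder of a horizontally extended arm:
    payload at full reach plus the link's own weight acting at half the reach."""
    return G * (payload_kg * reach_m + link_mass_kg * reach_m / 2.0)

def gantry_motor_torque(payload_kg, carriage_mass_kg, pulley_radius_m, accel_mps2, mu=0.1):
    """Torque for a belt-driven horizontal gantry axis: the motor only overcomes
    inertia and friction, not gravity acting on the payload."""
    moving_mass = payload_kg + carriage_mass_kg
    force = moving_mass * (accel_mps2 + mu * G)
    return force * pulley_radius_m

if __name__ == "__main__":
    # Illustrative numbers: 1 kg payload, 0.8 m reach and 2 kg link for the arm;
    # 3 kg carriage, 10 mm pulley radius, 1 m/s^2 acceleration for the gantry.
    print(f"Arm shoulder torque : {arm_shoulder_torque(1.0, 0.8, 2.0):.2f} N*m")
    print(f"Gantry motor torque : {gantry_motor_torque(1.0, 3.0, 0.01, 1.0):.3f} N*m")
```

Under these assumed numbers the arm joint must hold on the order of 15 N*m while the gantry axis needs well under 0.1 N*m, which is why the Cartesian layout lets us avoid expensive high-torque actuators.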
For a more detailed explanation of our work, check out our Phase 2 Report submission - Report.pdf
Having validated our solution in the PyBullet simulator, we are now moving on to building a real-world prototype that closely resembles our idea within the given budget of INR 50,000.
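For reference, a minimal sketch of the kind of PyBullet validation loop we use is shown below. The `cartesian_robot.urdf` file and the prismatic joint indices are placeholders for illustration, not the actual assets in this repository.

```python
# Minimal PyBullet sketch of a Cartesian pick motion.
# "cartesian_robot.urdf" and the joint indices are placeholder assumptions.
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # use p.GUI to visualise
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

plane = p.loadURDF("plane.urdf")
item = p.loadURDF("cube_small.urdf", basePosition=[0.5, 0.3, 0.025])
robot = p.loadURDF("cartesian_robot.urdf", useFixedBase=True)  # placeholder URDF

X_AXIS, Y_AXIS, Z_AXIS = 0, 1, 2  # assumed prismatic joint indices

def move_to(x, y, z, steps=240):
    """Drive the three prismatic axes to a target with position control,
    then step the simulation until the motion settles."""
    for joint, target in ((X_AXIS, x), (Y_AXIS, y), (Z_AXIS, z)):
        p.setJointMotorControl2(robot, joint, p.POSITION_CONTROL,
                                targetPosition=target, force=200)
    for _ in range(steps):
        p.stepSimulation()

# Move above the item, descend, (gripper/suction actuation would go here),
# lift, and carry to a drop location.
move_to(0.5, 0.3, 0.3)
move_to(0.5, 0.3, 0.05)
move_to(0.5, 0.3, 0.3)
move_to(0.0, 0.0, 0.3)

p.disconnect()
```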
We are actively working on addressing issues such as simulation-to-reality transfer of our approach and customization of the pipeline for the fabricated hardware.
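One direction we are exploring for the sim-to-real gap is domain randomization (see the references below). The following is a minimal sketch of how physical and visual properties could be randomized per episode in PyBullet; the randomization ranges are chosen purely for illustration.

```python
# Minimal domain-randomization sketch for sim-to-real transfer in PyBullet.
# The randomization ranges below are illustrative assumptions.
import random
import pybullet as p

def randomize_episode(object_ids):
    """Randomize physical and visual properties of the given bodies so a
    policy trained in simulation does not overfit to one fixed setting."""
    # Perturb gravity slightly around the nominal value.
    p.setGravity(0, 0, -9.81 * random.uniform(0.95, 1.05))

    for body in object_ids:
        # Physics: mass and surface friction of the object's base link.
        p.changeDynamics(body, -1,
                         mass=random.uniform(0.05, 0.5),
                         lateralFriction=random.uniform(0.3, 1.0))
        # Appearance: random base-link colour, so the vision pipeline
        # cannot rely on fixed object textures.
        p.changeVisualShape(body, -1,
                            rgbaColor=[random.random(),
                                       random.random(),
                                       random.random(), 1.0])
```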
1. Any contributions or suggestions are most welcome. Do contact the contributors with your queries.
2. For clarifications about running our code, feel free to contact us.
- Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching
- Robotic Grasping of Novel Objects using Vision
- Deep Reinforcement Learning for Vision-Based Robotic Grasping: A Simulated Comparative Evaluation of Off-Policy Methods
- Vision-based Robotic Grasp Detection From Object Localization, Object Pose Estimation To Grasp Estimation: A Review
- Robotic Grasping in Cluttered Environments - Stanford Videos
- Analysis and Observations from the First Amazon Picking Challenge
- Team Delft’s Robot, Winner of the Amazon Picking Challenge 2016 - Their implementation on GitHub
- An Analytical Method to Find the Workspace of a Robotic Manipulator
- How I won the Flipkart ML challenge
- Domain Randomization for Sim2Real Transfer
- Amazon Picking Challenge - Cartman - Robot using 3D printer system
- Amazon Picking Challenge - MIT and Princeton
- Grasp Prediction on RGB-D images [Paper] [Code]
- High Quality Monocular Depth Estimation via Transfer Learning [Paper] [Code]
- RefineNet for object segmentation
- Light-Weight RefineNet for object segmentation from RGB-D images
- Training on the COCO dataset to master object segmentation - Medium
- DenseFusion: 6D Object Pose Estimation by Iterative Dense Fusion - Implementation GitHub
- Using Geometry to Detect Grasp Poses in 3D Point Clouds
- Using Geometry to Detect Grasping Points on 3D Unknown Point Cloud
- Vision-based Robotic Grasp Detection From Object Localization, Object Pose Estimation To Grasp Estimation: A Review - Survey Paper
- Efficient Grasping from RGBD Images: Learning using a new Rectangle Representation