Vrep Grasp & Insert Code Base
Overall Idea:
- Grasp a target object with as little post-grasp displacement as possible
- Match the target object against the hole to decide whether the object is insertable
- Insert
Key features:
Minimal Displacement Grasping:
- Visual affordance network (image input --> CNN --> heatmap scoring each grasp primitive)
- Self-supervised labeling (grasp score = f(displacement); for the detailed equation see get_grasp_label_value in grasp_trainer.py)
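The actual labeling equation lives in get_grasp_label_value in grasp_trainer.py; the sketch below only illustrates the general shape such a self-supervised label could take (an assumed exponential decay with measured displacement, not the repository's formula):

```python
import numpy as np

def grasp_label_value(displacement_mm, scale=5.0):
    """Hypothetical self-supervised grasp label.

    Maps the object's measured post-grasp displacement to a score in
    (0, 1]: a perfectly still object scores 1.0, and the score decays
    exponentially as the displacement grows. The real equation is in
    get_grasp_label_value in grasp_trainer.py.
    """
    return float(np.exp(-displacement_mm / scale))
```

A grasp that moved the object 5 mm would thus be labeled noticeably worse than one that moved it 1 mm, which is the signal the affordance network is trained on.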
Matching:
- Two binary image patches represent the target object and the hole (pixels with value '1' belong to the object or to the hole region).
- The current matching algorithm is an iterative binary search.
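The core containment test behind such matching can be sketched as follows. This is a simplified illustration, not the repository's iterative binary search: it only tries the four axis-aligned rotations of the object patch, and the function names are invented for this example:

```python
import numpy as np

def fits(object_mask, hole_mask):
    """True if every '1' pixel of the object lies inside the hole region."""
    return not np.any(object_mask & ~hole_mask)

def insertable(object_mask, hole_mask):
    """Try the four 90-degree rotations of the object patch.

    Hypothetical simplification: the code base searches rotation angles
    iteratively; here only axis-aligned rotations are checked.
    """
    return any(fits(np.rot90(object_mask, k), hole_mask) for k in range(4))
```

With full-resolution masks one would search finer rotation steps (and translations), but the pass/fail criterion stays the same: the rotated object mask must be a subset of the hole mask.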
Insert:
- Because grasping introduces displacement, SAC (Soft Actor-Critic) is used to learn a compensating insertion policy.
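One way such compensation can be wired in is as a residual correction on top of the nominal insertion pose. The interface below is an assumption for illustration (a bounded 3-D action scaled to metres), not the code base's actual SAC action space:

```python
import numpy as np

def compensated_insert_pose(nominal_pose, policy_action, action_scale=0.005):
    """Apply a learned residual correction to the nominal insertion pose.

    Hypothetical interface: the SAC policy outputs a bounded action in
    [-1, 1]^3 (x, y, z offsets), which is scaled to metres and added to
    the pose estimated before grasping, compensating the small
    post-grasp displacement of the object in the gripper.
    """
    action = np.clip(np.asarray(policy_action, dtype=float), -1.0, 1.0)
    return np.asarray(nominal_pose, dtype=float) + action_scale * action
```

Keeping the action bounded and small means the policy can only nudge the insertion target, which keeps exploration safe around the hole.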
Requirements:
- numpy, PyTorch 1.5.1
- CoppeliaSim (previously named V-REP)
- matplotlib
To Start Simulation Experiment:
- Run CoppeliaSim (V-REP)
- Open scene: "simulation/simulation.ttt"
- To train the minimal-displacement grasping policy, run main_grasp_training.py
- To train the SAC insertion policy, run main_insert_trask.py