Gripper control points
jucamohedano opened this issue
Hello. I am an undergraduate student currently using contact-graspnet in a project.
I am using contact-graspnet on a different robot (TIAGo from PAL Robotics), whose gripper differs from the Panda's. To be safe, I move the generated grasps backwards by an offset and then perform a forward motion to get the object between the robot's fingers. You trained the network with the Panda's gripper configuration, using its STL model as well as points on the gripper (`contact_graspnet/gripper_control_points/panda_gripper_coords.yml`). I wonder whether retraining the network with gripper coordinates for TIAGo's gripper would improve the generated grasps. One thing I think it would improve is the grasps generated under the gripper-width constraints, since the Panda gripper is wider than TIAGo's.
Thanks in advance!
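For context, the retreat offset I apply looks roughly like the following sketch (the offset value is illustrative, and I assume the grasp frame's z-axis is the approach direction):

```python
import numpy as np

def pregrasp_pose(grasp_pose, offset=0.05):
    """Shift a 4x4 grasp pose backwards along its own approach (z) axis.

    grasp_pose: (4, 4) homogeneous grasp pose in the camera/world frame.
    offset:     retreat distance in meters (illustrative value).
    """
    retreat = np.eye(4)
    retreat[2, 3] = -offset      # translate along the grasp frame's -z axis
    return grasp_pose @ retreat  # right-multiplication offsets in the grasp frame

# usage: move to pregrasp_pose(g), then forward to g to close around the object
```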
Hi!
Yes, the object-wise grasp annotations were obtained from physics simulation using the Panda gripper. However, if you want to use another gripper, you still have several options:
- You can recreate the table top training scenes using your TIAGo gripper as the collision mesh and then retrain (I did that successfully for a Robotiq gripper). This lets you filter out grasps that are too wide before training. You need to make sure that the meshes of both grippers are properly aligned so that they grasp at the same contacts; therefore, you should visualize the transformed gripper mesh in the table top scenes. To use another gripper, just pass it as an argument:
```
python tools/create_table_top_scenes.py /path/to/acronym --gripper_path /path/to/tiago/gripper -vis
```
- Alternatively, a much easier adaptation is to simply scale the incoming point cloud by the ratio of the two gripper widths. This makes the extent of objects appear larger to the network and avoids infeasible grasps due to the smaller gripper width. Of course, you then need to scale the grasp pose predictions back as well; see the sketch below.
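As a rough sketch of that scaling trick (the width values and the prediction call are placeholders, not this repo's exact API):

```python
import numpy as np

# Placeholder maximum opening widths in meters; use your grippers' real values.
PANDA_MAX_WIDTH = 0.08
TIAGO_MAX_WIDTH = 0.06

def predict_with_scaling(predict_fn, pc_cam):
    """Scale the cloud so objects appear larger, predict, then rescale poses.

    predict_fn: stands in for the network inference call; it should take an
                (M, 3) point cloud and return (N, 4, 4) grasp poses.
    """
    scale = PANDA_MAX_WIDTH / TIAGO_MAX_WIDTH
    grasps = predict_fn(pc_cam * scale)  # network sees an enlarged scene
    grasps = grasps.copy()
    grasps[:, :3, 3] /= scale            # map translations back to metric space
    return grasps                        # rotations are scale-invariant
```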
Hi again! :)
Thank you for your help! Sorry for the late response; I'm only now working on the data generation and training of contact-graspnet.
I have a question regarding the first option you suggested. From your comment, it sounds like I would only have to create the table top scenes with the TIAGo gripper. However, I believe I first need to generate the `mesh_contacts` through `create_contact_infos.py` in order to generate the table top scenes. Is this correct? If so, then I must write my own gripper object (say, `TiagoGripper`), like the `PandaGripper` object.
Hi :)
Yes, you need the `mesh_contacts`, but I would suggest still using the ones from the Panda gripper and only changing the collision model during the table top scene generation. If the TIAGo gripper width is equal or smaller and the gripper depth does not differ extremely, we can just safely discard the newly colliding grasps. Then you don't need to implement your own gripper class, but of course you can also go that way if you know what you are doing.
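To check the alignment, an overlay like the following trimesh sketch can help (the file paths and the alignment transform are placeholders you need to adapt):

```python
import numpy as np
import trimesh

# Placeholder paths; point these at your actual gripper meshes.
panda = trimesh.load("gripper_models/panda_gripper.obj")
tiago = trimesh.load("gripper_models/tiago_gripper.stl")

# Transform aligning the TIAGo gripper frame with the Panda grasp frame so
# that both grippers close at the same contacts (identity here; adjust it).
T_align = np.eye(4)
tiago.apply_transform(T_align)

# Overlay both meshes with translucent colors to inspect the alignment.
panda.visual.face_colors = [0, 255, 0, 100]
tiago.visual.face_colors = [255, 0, 0, 100]
trimesh.Scene([panda, tiago]).show()
```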
I uploaded the Panda `mesh_contacts` to Google Drive to save you some computation, but you will still need the meshes from ShapeNet for the table top scene generation, as described in the README.
Let me know how it goes.
Thank you!
Indeed, TIAGo's gripper is actually a bit wider than the Panda's. I will do a first iteration with the Panda `mesh_contacts` that you provided, which really saves me some time :)
I will let you know how it goes.
Hi,
I attempted to train the network with the table top scenes that I generated using the TIAGo gripper. I'm running into an assertion error when evaluating after the first epoch:
```
Traceback (most recent call last):
  File "contact_graspnet/train.py", line 228, in <module>
    train(global_config, ckpt_dir)
  File "contact_graspnet/train.py", line 121, in train
    eval_validation_scenes(sess, ops, summary_ops, file_writers, pcreader)
  File "contact_graspnet/train.py", line 185, in eval_validation_scenes
    assert scene_idx[0] == (pcreader._num_train_samples + batch_idx)
AssertionError
********** terminating renderer **************
```
I am currently debugging. I printed out the value of `scene_idx`; below I show only a few iterations of the for loop in `eval_validation_scenes()` until it fails. I don't understand why `scene_idx[0] = 0`.
```
pcreader._num_train_samples = 9005
batch_idx = 126
scene_idx[0] = 9131
scene_idx = [9131]

pcreader._num_train_samples = 9005
batch_idx = 127
scene_idx[0] = 9132
scene_idx = [9132]

pcreader._num_train_samples = 9005
batch_idx = 128
scene_idx[0] = 9133
scene_idx = [9133]

pcreader._num_train_samples = 9005
batch_idx = 129
scene_idx[0] = 0
scene_idx = [0]
```
I wonder if there might be something wrong with my data: the assertion expects `scene_idx[0]` to be `9005 + 129 = 9134`, so the reader seems to wrap around to scene 0, as if there were fewer scenes on disk than the train/eval split assumes. I am generating the table top scenes again because I modified TIAGo's gripper, so I will try training again with the new scenes. Still, I would appreciate any suggestions based on the information above.
Thanks!
Hey @jucamohedano
Sorry for the late answer. Were you able to fix your issue in the meantime?
Hi!
Sorry, I didn't update my question. After going through the data re-generation process, I was able to train for the 15 epochs with the default model values defined in `config.yaml`. I made a slight modification to TIAGo's gripper position in Blender, which resulted in a higher number of grasps to train the model with, instead of many of them being filtered out (which is what happened on my first try). The screenshots below show the comparison between both grippers. The first image shows how I used the TIAGo gripper to generate the table top scenes (obviously the Panda gripper is not actually in collision with TIAGo's; my intention there is to show the alignment between both grippers).
However, it looks like I'm unsuccessful: after calling `predict_grasps` with the weights from my training, I get an empty `pred_grasps_cam` array. The `select_grasps` call inside `predict_grasps` returns an empty array as well. I am playing with the threshold to see if that's the problem, but it isn't helping. I'm trying to understand this part of the code at the moment, and I'm also comparing against the weights you provide from your training on the Panda gripper.
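As a quick diagnostic, I'm dumping the raw confidence scores before thresholding, roughly like the sketch below (variable names are illustrative; `scores` stands in for the per-grasp confidences computed inside `predict_grasps`):

```python
import numpy as np

def inspect_scores(scores, thresholds=(0.5, 0.25, 0.1, 0.05, 0.0)):
    """Print score statistics and how many grasps survive each threshold."""
    scores = np.asarray(scores).ravel()
    if scores.size == 0:
        print("no scores at all -- the network returned nothing")
        return
    print(f"n={scores.size}  min={scores.min():.3f}  "
          f"max={scores.max():.3f}  mean={scores.mean():.3f}")
    for t in thresholds:
        print(f"  threshold {t:.2f}: {(scores > t).sum()} grasps kept")
```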
Hi @jucamohedano,
Did you successfully retrain the model using the TIAGo gripper?
Best regards,
xiaolin
Hi! @xlim1996 Sorry for the super late reply. I didn't end up retraining the model; I modified the output to be sufficient for TIAGo's gripper. If I had more time, I would have tried loading the data in chunks and training.
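For reference, one possible form of such an output adaptation is to shift each predicted pose along its approach axis to account for the differing finger depth. A minimal sketch with placeholder depth values (not necessarily the exact change made here):

```python
import numpy as np

# Placeholder finger depths of the two grippers in meters.
PANDA_DEPTH = 0.1034
TIAGO_DEPTH = 0.12

def adapt_poses(pred_grasps_cam):
    """Shift each (N, 4, 4) pose along its approach (z) axis by the depth delta."""
    shift = np.eye(4)
    shift[2, 3] = PANDA_DEPTH - TIAGO_DEPTH
    return pred_grasps_cam @ shift  # matmul broadcasts over the batch dimension
```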