NVlabs / contact_graspnet

Efficient 6-DoF Grasp Generation in Cluttered Scenes


inference runtime

harryzhangOG opened this issue · comments

Hi, thanks for the code. May I ask how much time it usually takes to run inference on a point cloud of 10k points? For some reason it is super slow for us, and we suspect the GPUs are not being utilized (we have two 3090s). We then updated the TF version to 2.5, but that caused conflicts with the PointNet2 implementation. Is there a way to get around this? Thanks.

The runtime is largely independent of the number of points in the original point cloud, since it is super/sub-sampled to 20000 points by default. If you feed in the full point cloud at once without segmentation masks, you should get around 0.2s inference time for one forward pass, and not much more for several forward passes. With segmentation masks it can take slightly longer, around 0.3s.
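To illustrate why runtime does not depend on the input size: the cloud is super/sub-sampled to a fixed 20000 points before the forward pass, so the network always sees the same workload. Here is a minimal sketch of that idea (the function name and the NumPy-based sampling scheme are illustrative, not the repo's actual pre-processing code):

```python
import numpy as np

def resample_point_cloud(points, target=20000, seed=0):
    """Sub- or super-sample an (N, 3) cloud to exactly `target` points.

    Sub-sampling draws indices without replacement; super-sampling pads
    the cloud by drawing extra indices with replacement.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    if n >= target:
        idx = rng.choice(n, size=target, replace=False)
    else:
        extra = rng.choice(n, size=target - n, replace=True)
        idx = np.concatenate([np.arange(n), extra])
    return points[idx]

# A 10k-point cloud is padded up to 20k; a 50k cloud is cut down to 20k.
small = resample_point_cloud(np.random.rand(10_000, 3))
large = resample_point_cloud(np.random.rand(50_000, 3))
print(small.shape, large.shape)  # (20000, 3) (20000, 3)
```

Either way, the network input has a constant shape, which is why a 10k cloud is no faster than a 100k one.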

I succeeded in updating the code to TF 2.5 and CUDA 11.2, so it is definitely possible, but you need to rebuild the PointNet2 ops: https://github.com/NVlabs/contact_graspnet/blob/main/pointnet2/tf_ops/HowTO.md

Maybe the dual-GPU setup causes some trouble, as `CUDA_VISIBLE_DEVICES` is set to 0 in the code, so it probably only uses the first GPU. But I don't have time to dig into the details.
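If the hardcoded device mask is the issue, one workaround is to override it before TensorFlow initializes CUDA. `CUDA_VISIBLE_DEVICES` is the standard CUDA environment variable, not something specific to this repo; the sketch below just shows the ordering that matters:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before TensorFlow first initializes
# CUDA, i.e. before `import tensorflow` runs anywhere in the process.
# "1" exposes only the second physical GPU; "0,1" would expose both.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# ... only now: import tensorflow as tf
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

Setting the variable from the shell (`CUDA_VISIBLE_DEVICES=1 python inference.py`) works too, as long as the script does not overwrite it afterwards.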