My implementation of the paper "Graspness Discovery in Clutters for Fast and Accurate Grasp Detection" (ICCV 2021).
- Python 3
- PyTorch 1.8
- Open3d 0.8
- TensorBoard 2.3
- NumPy
- SciPy
- Pillow
- tqdm
- MinkowskiEngine
Get the code.

    git clone https://github.com/rhett-chen/graspness_implementation.git
    cd graspnet-graspness
Install packages via pip.

    pip install -r requirements.txt
Compile and install pointnet2 operators (code adapted from votenet).

    cd pointnet2
    python setup.py install
Compile and install knn operator (code adapted from pytorch_knn_cuda).

    cd knn
    python setup.py install
Install graspnetAPI for evaluation.

    git clone https://github.com/graspnet/graspnetAPI.git
    cd graspnetAPI
    pip install .
For MinkowskiEngine, please refer to https://github.com/NVIDIA/MinkowskiEngine for installation instructions.
Point-level graspness labels are not included in the original dataset and need to be generated. Make sure you have downloaded the original dataset from GraspNet. The generation code is in dataset/generate_graspness.py.

    cd dataset
    python generate_graspness.py --dataset_root /data3/graspnet --camera_type kinect
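For intuition, the graspness score of a scene point reflects what fraction of the grasp candidates sampled at that point are feasible. A rough sketch with synthetic data (the variable names and the min-max normalization here are illustrative, not the exact formula in generate_graspness.py):

```python
import numpy as np

# Synthetic stand-in for per-point grasp labels: for each of N points,
# suppose V (view, angle, depth) grasp candidates were sampled, each with
# a binary feasibility flag (1 = a valid grasp exists at that pose).
rng = np.random.default_rng(0)
num_points, num_candidates = 1024, 300
feasible = rng.random((num_points, num_candidates)) < 0.1

# Point-wise graspness: fraction of feasible candidates per point,
# min-max normalized over the scene for illustration.
graspness = feasible.mean(axis=1)
graspness = (graspness - graspness.min()) / (graspness.max() - graspness.min() + 1e-8)

print(graspness.shape)  # (1024,)
```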
The original dataset's grasp_label files contain redundant data; simplifying them significantly reduces memory cost. The code is in dataset/simplify_dataset.py.

    cd dataset
    python simplify_dataset.py --dataset_root /data3/graspnet
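The idea behind the simplification is simply to drop label fields that training and inference never read, then re-save the remaining arrays compressed. A hedged sketch with a synthetic label file (the field names are made up for illustration; they are not the dataset's actual keys):

```python
import numpy as np
import os
import tempfile

# Create a synthetic "label" file containing a redundant field.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "000_labels.npz")
np.savez(src,
         points=np.zeros((100, 3), dtype=np.float32),
         scores=np.zeros((100,), dtype=np.float32),
         redundant=np.zeros((100, 48), dtype=np.float32))  # unused field

# Keep only the fields actually needed and re-save compressed.
keep = ("points", "scores")
with np.load(src) as data:
    slim = {k: data[k] for k in keep}
dst = os.path.join(tmp, "000_labels_simplified.npz")
np.savez_compressed(dst, **slim)

print(sorted(np.load(dst).files))  # ['points', 'scores']
```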
Training examples are shown in command_train.sh. `--dataset_root`, `--camera` and `--log_dir` should be specified according to your settings. You can use TensorBoard to visualize the training process.
Testing examples are shown in command_test.sh, which contains both inference and result evaluation. `--dataset_root`, `--camera`, `--checkpoint_path` and `--dump_dir` should be specified according to your settings. Set `--collision_thresh` to -1 for fast inference.
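`--collision_thresh` controls post-hoc collision filtering: each predicted grasp gets a collision ratio against the scene cloud, and grasps above the threshold are discarded, while -1 skips the check entirely. A rough NumPy sketch of the filtering logic (a toy stand-in, not the repo's actual collision detector):

```python
import numpy as np

def filter_grasps(scores, collision_ratios, collision_thresh):
    """Return indices of grasps whose collision ratio is at or below the threshold.

    collision_thresh < 0 disables filtering entirely (fast inference).
    """
    if collision_thresh < 0:
        return np.arange(len(scores))
    return np.flatnonzero(collision_ratios <= collision_thresh)

scores = np.array([0.9, 0.8, 0.7, 0.6])
ratios = np.array([0.0, 0.05, 0.2, 0.5])  # fraction of gripper volume colliding

print(filter_grasps(scores, ratios, 0.1))  # [0 1]
print(filter_grasps(scores, ratios, -1))   # [0 1 2 3]
```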
If you need the trained weights, you can contact me directly.
Results "In repo" report the model performance of my results without collision detection.
Evaluation results on Kinect camera:
|          | Seen  |       |       | Similar |       |       | Novel |       |       |
|----------|-------|-------|-------|---------|-------|-------|-------|-------|-------|
|          | AP    | AP0.8 | AP0.4 | AP      | AP0.8 | AP0.4 | AP    | AP0.8 | AP0.4 |
| In paper | 61.19 | 71.46 | 56.04 | 47.39   | 56.78 | 40.43 | 19.01 | 23.73 | 10.60 |
| In repo  | 61.83 | 73.28 | 54.14 | 51.13   | 62.53 | 41.57 | 19.94 | 24.90 | 11.02 |
If you meet the `torch.floor` error in MinkowskiEngine, you can simply solve it by changing line 262 of MinkowskiEngine/utils/quantization.py from `discrete_coordinates = _auto_floor(coordinates)` to `discrete_coordinates = coordinates`.
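For context, MinkowskiEngine quantizes continuous coordinates to integer voxel indices by dividing by the voxel size and flooring; the workaround above skips the floor when the coordinates are already discrete. A minimal NumPy sketch of that quantization step (an illustration of the idea, not MinkowskiEngine's actual implementation):

```python
import numpy as np

def quantize(coords, voxel_size):
    # Map metric coordinates to integer voxel indices by floor division.
    return np.floor(coords / voxel_size).astype(np.int64)

pts = np.array([[0.012, -0.003, 0.049],
                [0.011, -0.004, 0.048]])
# Points falling in the same 5 mm voxel get identical indices.
print(quantize(pts, 0.005))  # [[ 2 -1  9] [ 2 -1  9]]
```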
My code is mainly based on graspnet-baseline: https://github.com/graspnet/graspnet-baseline.