
👾 VADER 👾

Paper: https://arxiv.org/abs/2303.15104

PyTorch code for "Generalizable Local Feature Pre-training for Deformable Shape Analysis" (CVPR 2023).

You underestimate the power of the local side!


👷 Installation

  • Install Dependencies: This implementation requires Python 3.7 or newer. Install the dependencies with pip:

        pip install -r requirements.txt

  • Install DiffVoxel: Navigate to the diffvoxel folder and execute:

        python setup.py bdist_wheel
        pip install --upgrade dist/diffvoxel-0.0.1-*.whl

  • Install PointNet2: Navigate to the Pointnet2_PyTorch/pointnet2_ops_lib folder and execute:

        python setup.py bdist_wheel
        pip install --upgrade dist/pointnet2_ops-3.0.0-*.whl  # or the version you have

📖 Usage

This repository provides the code for pre-training our network to learn local features that generalize across different shape categories, as well as the code for extracting the VADER features used in downstream tasks.

Our paper presents new insights into the transferability of features from networks trained on non-deformable shapes. Once the network is pretrained (we provide pretrained weights), VADER features can be extracted and used as replacements for traditional input features (like XYZ or HKS) in any downstream task.

For all experiments, we adapted the code from Diffusion-Net, by substituting their input features with our VADER features. Visit their repository for detailed usage instructions.
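In practice the substitution is a one-line change at the data-loading stage. A minimal sketch (the function name and the 32-dimensional descriptor size below are hypothetical; the actual feature dimension depends on the pretrained model you extract from):

```python
import numpy as np

def select_input_features(xyz, vader=None):
    """Pick the per-vertex input signal fed to the downstream network.

    xyz   : (V, 3) vertex coordinates (the traditional input)
    vader : optional (V, C) precomputed VADER descriptors; when given,
            they simply replace xyz as the network's input features.
    """
    if vader is not None:
        assert vader.shape[0] == xyz.shape[0], "one descriptor per vertex"
        return vader.astype(np.float32)
    return xyz.astype(np.float32)

# toy example: 5 vertices, hypothetical 32-dim VADER descriptors
rng = np.random.default_rng(0)
xyz = rng.random((5, 3))
vader = rng.random((5, 32))
feats = select_input_features(xyz, vader)    # (5, 32) VADER input
fallback = select_input_features(xyz)        # (5, 3) raw XYZ input
```

Everything downstream (the DiffusionNet architecture, losses, training loop) stays unchanged; only the input channel count differs.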

  • Architecture Code: Located in the UPDesc folder.

  • Pretrained Models: Two models pretrained on the 3DMatch dataset are provided in the UPDesc/demo/trained_models folder, one using supervised NCE loss and the other using unsupervised cycle loss.

  • Extracting VADER Features: Use the extract_vader.py script in UPDesc/demo/ as follows:

    python3 extract_vader.py --model UPDescUniScale --ckpt ./trained_models/name_of_pretrained_model/weights.ckpt --hparams ./trained_models/name_of_pretrained_model/hparams.yaml --data_root ./path/to/data --scale 6.0 --out_root ./path/to/save

    where the scale parameter multiplies the receptive field of the network. It can be chosen either with the MMD-loss optimization described in the paper, or empirically (we found that scales between 5 and 6.5 work well for area-normalized human shapes, and scales between 4 and 6 work well for L2-normalized RNA shapes).
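The two shape normalizations mentioned above can be sketched as follows (a minimal NumPy sketch; the unit-Frobenius-norm convention used for "L2 normalized" is our assumption, not taken from the paper):

```python
import numpy as np

def area_normalize(verts, faces):
    """Rescale a triangle mesh to unit total surface area
    (the convention mentioned for human shapes)."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    # each triangle's area is half the norm of the edge cross product
    total_area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()
    return verts / np.sqrt(total_area)

def l2_normalize(verts):
    """Center the shape and rescale to unit Frobenius norm
    (one common reading of "L2 normalized"; an assumption here)."""
    centered = verts - verts.mean(axis=0)
    return centered / np.linalg.norm(centered)

# toy single-triangle mesh with area 0.5
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
v_area = area_normalize(verts, faces)  # rescaled so the triangle has area 1
v_l2 = l2_normalize(verts)             # centered, unit Frobenius norm
```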

📈 Results

If you wish to report our results, we have summarized them below. Our method is referred to as VADER; "X on Y" indicates that the method was trained on dataset X and tested on dataset Y.

  • Near-Isometric Shape Matching: We provide results on the FAUST (F), SCAPE (S), and SHREC (SH) datasets, using their remeshed versions. We report the mean geodesic error, following the protocol used in deep functional map papers. Our method is unsupervised.

    Method   F on F   S on S   F on S   S on F   F on SH   S on SH
    VADER     3.9      4.2      4.1      3.9      6.4       6.9
  • Molecular Surface Segmentation: We provide results on the RNA molecules dataset. We report the mean accuracy, following the same protocol as the original paper. Our method is supervised. We provide results for training on the full dataset, on only 50 shapes, and on only 100 shapes.

    Method   Full Dataset   50 Shapes      100 Shapes
    VADER    92.6 ± 0.02%   83.2 ± 0.20%   86.8 ± 0.09%
  • Partial Animal Matching: We provide results on the SHREC'16 Cuts dataset. We report the mean geodesic error, following the same protocol as deep functional map papers. Our method is supervised.

    Method   SHREC'16 Cuts
    VADER    3.7
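For completeness, the two evaluation metrics used above can be sketched as follows (a minimal NumPy sketch; deep functional map papers typically normalize geodesic errors by the square root of the target shape's surface area, passed here as the hypothetical `norm` argument):

```python
import numpy as np

def mean_geodesic_error(geo_dist, pred, gt, norm=1.0):
    """Mean geodesic error of a predicted correspondence.

    geo_dist : (V, V) pairwise geodesic distances on the target shape
    pred, gt : predicted / ground-truth target vertex per source vertex
    norm     : normalization constant (e.g. sqrt of the target's area)
    """
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float(geo_dist[pred, gt].mean()) / norm

def per_vertex_accuracy(pred_labels, gt_labels):
    """Fraction of vertices assigned the correct segment label."""
    return float(np.mean(np.asarray(pred_labels) == np.asarray(gt_labels)))

# toy 3-vertex path graph: geodesic distance = index difference
geo = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
err = mean_geodesic_error(geo, pred=[0, 1, 2], gt=[0, 2, 2])
acc = per_vertex_accuracy([0, 1, 1, 2], [0, 1, 2, 2])
```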

🎓 Citation

If you find this work useful in your research, please consider citing:

@inproceedings{attaiki2023vader,
    title={Generalizable Local Feature Pre-training for Deformable Shape Analysis},
    author={Souhaib Attaiki and Lei Li and Maks Ovsjanikov},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2023}
}

About


https://arxiv.org/abs/2303.15104

License: GNU General Public License v2.0

