Meowuu7 / Point-Transformer

PyTorch Implementation of Point Transformer

Classification

Data Preparation

Run

  • Use the following command to train and evaluate point-transformer on ModelNet40 for the shape classification task (a sketch of the point-transformer layer that the --attn_mult, --use_abs_pos and --with_normal flags configure follows these commands):

    # Arguments:
    #   --num_feat      number of features used for each point in the shape cloud
    #   --device        which GPU to use for training
    #   --batch_size    training batch size
    #   --dp_ratio      dropout ratio
    #   --attn_mult     attention multiplier for the point-transformer layer
    #   --task          cls = shape classification
    #   --use_sgd       use the SGD optimizer instead of Adam
    #   --use_abs_pos   use absolute position information in the point-transformer layer
    #   --with_normal   use the points' normal information in the point-transformer layer
    #   --more_aug      apply additional data augmentation during training
    python3 main_training.py --num_feat=${num_feat} \
        --device=${device} \
        --batch_size=${batch_size} \
        --dp_ratio=${dropout_ratio} \
        --attn_mult=${attn_mult} \
        --task=cls \
        [--use_sgd] [--use_abs_pos] [--with_normal] [--more_aug]
  • Use the following command to train and evaluate point-transformer on S3DIS for the semantic segmentation task; the flags have the same meaning as above, with --num_feat fixed to 9 and --task set to sem_seg:

    python3 main_training.py --num_feat=9 \
        --device=${device} \
        --batch_size=${batch_size} \
        --dp_ratio=${dropout_ratio} \
        --attn_mult=${attn_mult} \
        --task=sem_seg \
        [--use_sgd] [--use_abs_pos] [--with_normal] [--more_aug]
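
For reference, the core of each point-transformer layer is vector self-attention over a local neighborhood, which the --attn_mult, --use_abs_pos and --with_normal flags configure. The sketch below is illustrative only and is not this repository's module: the class name, the k-nearest-neighbour size, the relative-position encoding MLP, and the reading of attn_mult as a hidden-width multiplier are all assumptions.

    # Minimal sketch of a point-transformer layer (vector self-attention over k nearest
    # neighbours). Illustrative only -- not this repo's implementation; treating
    # `attn_mult` as a hidden-width multiplier is an assumption.
    import torch
    import torch.nn as nn


    def index_points(x, idx):
        # Gather features x: (B, N, C) at neighbour indices idx: (B, N, k) -> (B, N, k, C).
        batch = torch.arange(x.shape[0], device=x.device).view(-1, 1, 1)
        return x[batch, idx]


    class PointTransformerLayerSketch(nn.Module):
        def __init__(self, dim, k=16, attn_mult=2):
            super().__init__()
            self.k = k
            self.to_q = nn.Linear(dim, dim)
            self.to_k = nn.Linear(dim, dim)
            self.to_v = nn.Linear(dim, dim)
            # Positional encoding MLP on relative coordinates p_i - p_j.
            self.pos_mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
            # Attention MLP; attn_mult widens its hidden layer (assumption).
            self.attn_mlp = nn.Sequential(
                nn.Linear(dim, dim * attn_mult), nn.ReLU(), nn.Linear(dim * attn_mult, dim)
            )

        def forward(self, feats, pos):
            # feats: (B, N, C) point features; pos: (B, N, 3) point coordinates.
            knn_idx = torch.cdist(pos, pos).topk(self.k, largest=False).indices  # (B, N, k)
            q = self.to_q(feats)                                     # (B, N, C)
            k = index_points(self.to_k(feats), knn_idx)              # (B, N, k, C)
            v = index_points(self.to_v(feats), knn_idx)              # (B, N, k, C)
            pos_enc = self.pos_mlp(pos.unsqueeze(2) - index_points(pos, knn_idx))

            # Vector attention: per-channel weights from (q_i - k_j + pos_enc),
            # softmax over the k neighbours, applied to (v_j + pos_enc).
            attn = self.attn_mlp(q.unsqueeze(2) - k + pos_enc).softmax(dim=2)
            return (attn * (v + pos_enc)).sum(dim=2)                 # (B, N, C)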

Results

  • Training setup: SGD with an initial learning rate of 0.001, decayed by a factor of 0.1 at epochs 120 and 160, trained for 200 epochs in total; batch size 16; dropout layers with ratio 0.4 between every two fully-connected layers; absolute position information is used in the point-transformer layers (see the optimizer/scheduler sketch after this list).

  • Instance classification accuracy on ModelNet40 is as follows:

             OA
    Paper    93.7%
    Ours     92.6%
  • Semantic segmentation overall accuracy on S3DIS is as follows:

             OA
    Paper    90.8%
    Ours     86.0%
  • (Performance could probably be improved by further tuning the dropout ratio, the initial learning rate, and the learning rate schedule.)
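
For concreteness, the training setup above maps onto the following PyTorch optimizer/scheduler configuration. This is a sketch under stated assumptions, not the repository's actual training loop: the momentum and weight-decay values are not given in this README, and `model` / `train_loader` are placeholders.

    # Sketch of the reported schedule: SGD, lr 0.001, x0.1 decay at epochs 120 and 160,
    # 200 epochs, batch size 16. `model` and `train_loader` are placeholders; momentum
    # and weight decay are assumed values not stated in this README.
    import torch
    import torch.nn.functional as F

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                                momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                     milestones=[120, 160], gamma=0.1)

    for epoch in range(200):
        for points, labels in train_loader:   # DataLoader built with batch_size=16
            optimizer.zero_grad()
            loss = F.cross_entropy(model(points), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()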

Miscellaneous

We look forward to the authors of Point Transformer releasing their official implementation, training pipelines, and trained weights.

About

License: MIT


Languages

Python 80.2%, Cuda 11.1%, C++ 7.6%, C 1.1%