Code for BMVC'23: "Sparse and Privacy-enhanced Representation for Human Pose Estimation"

Home Page: https://lyhsieh.github.io/sphp/

SPHP: Sparse and Privacy-enhanced Representation for Human Pose Estimation


Sparse and Privacy-enhanced Representation for Human Pose Estimation
Ting-Ying Lin*1, Lin-Yung Hsieh*1, Fu-En Wang1, Wen-Shen Wuen2, Min Sun1
1National Tsing Hua University, 2Novatek Microelectronics Corp. (* denotes equal contribution)
in BMVC 2023

Setup

  1. We recommend using Anaconda to set up the environment.

    conda create --name sphp python=3.7
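    Then activate the new environment before installing anything into it:

    conda activate sphp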
  2. Install torch, torchvision, and torchaudio from the PyTorch official site according to your CUDA version; an example command is shown below.
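    For example, with CUDA 11.7 (the index URL below is an assumption for illustration; use the exact command the PyTorch site generates for your setup):

    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117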

  3. Modify the dataset paths in config.yaml: change line 4 and line 11 to your own paths.

    dataset_path: &dataset_path "PATH_TO_LABEL_DATA"    # Line 4
    calib_path: &calib_path "PATH_TO_calibrate.npy"    # Line 11
  4. Install the remaining required libraries listed in requirements.txt.

    pip install -r requirements.txt
  5. If you want to use submanifold sparse convolution, follow the setup instructions at facebookresearch/SparseConvNet to install it; a rough sketch is shown below.
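    At the time of writing, the upstream repository builds from source roughly as follows (these exact commands are an assumption; defer to the instructions in facebookresearch/SparseConvNet if they differ):

    git clone https://github.com/facebookresearch/SparseConvNet.git
    cd SparseConvNet/
    bash develop.sh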

Dataset Download

  1. Fill out the Google Form and sign the agreement.

  2. We will share the download link of the SPHP dataset with you. (This may take a few days.)

  3. Download the dataset and put Master.tar.gz and Slave.tar.gz under the data folder.

  4. Extract the archives.

    cd data
    tar zxvf Master.tar.gz
    tar zxvf Slave.tar.gz

    Now, it should look like this.

    .
    ├── data
    │   ├── calibrate.npy
    │   ├── Master
    │   │   ├── s01
    │   │   │   ├── 01
    │   │   │   │   ├── EDG 
    │   │   │   │   │   └── contains 300 png files (edge images)
    │   │   │   │   ├── MVH 
    │   │   │   │   │   └── contains 300 png files (horizontal motion vector)
    │   │   │   │   ├── MVV 
    │   │   │   │   │   └── contains 300 png files (vertical motion vector)
    │   │   │   │   └── pose_change 
    │   │   │   │       └── contains 300 npy files (ground truth labels)
    │   │   │   ├── 02
    │   │   │   ├── ...
    │   │   │   └── 16
    │   │   ├── s02
    │   │   ├── ...
    │   │   └── s16
    │   └── Slave
    ├── template
    │   ├── ...
    │   └── ...
    └── Utils
        ├── ...
        └── ...

    The file structure in Slave should be the same as in Master. You can sanity-check the layout with the short sketch below.
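    A minimal Python sketch for that sanity check (the subject/clip path is taken from the tree above; the label array shape is not documented here, so it is only printed for inspection):

    from pathlib import Path

    import numpy as np

    root = Path("data/Master/s01/01")

    # Each modality folder should contain 300 png frames.
    for modality in ["EDG", "MVH", "MVV"]:
        frames = sorted((root / modality).glob("*.png"))
        print(modality, len(frames))

    # pose_change should contain 300 npy ground-truth files.
    labels = sorted((root / "pose_change").glob("*.npy"))
    print("pose_change", len(labels))
    print(np.load(labels[0]).shape)  # inspect one label array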

Training

Basic usage

  1. Choose a specification in the template folder. Folders whose names end with _submanifold use sparse convolution; the other folders use traditional convolution. As an example, take the edge modality with the Unet backbone using sparse convolution (a minimal sparse-convolution sketch follows these steps).

    cd template/SPHP_Unet_edge_submanifold/
  2. Run main.py

    python main.py --mode train
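For reference, here is a minimal hello-world of a 2D submanifold convolution under the SparseConvNet API (illustrative only, not code from this repository; grid size and channel counts are arbitrary):

    import torch
    import sparseconvnet as scn

    # A 3x3 submanifold convolution computes outputs only at active
    # (non-zero) input sites, so the sparsity pattern is preserved.
    conv = scn.Sequential().add(
        scn.SubmanifoldConvolution(2, 1, 8, 3, False)  # dim=2, 1->8 channels, 3x3, no bias
    ).add(
        scn.SparseToDense(2, 8)  # densify the result for inspection
    )

    # Wrap a sparse input: each coordinate row is (y, x, batch_index).
    input_layer = scn.InputLayer(2, torch.LongTensor([64, 64]))  # 2D, 64x64 grid
    coords = torch.LongTensor([[10, 20, 0], [11, 20, 0]])        # two active pixels
    feats = torch.ones(2, 1)                                     # one input channel

    out = conv(input_layer([coords, feats]))
    print(out.shape)  # torch.Size([1, 8, 64, 64])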

Parser arguments

  • --mode: training or testing mode; the default is training.

  • --batch_size: batch size; the default is 32.

  • --device: number of GPUs to use; the default is 1.

    Train on particular GPUs

    To train on particular GPUs, prepend CUDA_VISIBLE_DEVICES to the command, and keep the number of visible devices consistent with the --device argument.

    CUDA_VISIBLE_DEVICES=0,1,3 python main.py --mode train --device 3 --batch_size 64

Testing

  1. Modify config.yaml: replace the validation subjects on line 22 with the test subjects on line 23.

    # Line 22 (use these subjects for training)
    subject: ['s06','s07','s08','s16','s17','s18','s26','s27','s28','s36','s37','s38']

    # Line 23 (use these subjects for testing)
    subject: ['s09','s10','s19','s20','s29','s30','s39','s40']
  2. Run main.py

    python main.py --mode val

Citation

@inproceedings{lin2023sparse,
    title     = {Sparse and Privacy-enhanced Representation for Human Pose Estimation},
    author    = {Lin, Ting-Ying and Hsieh, Lin-Yung and Wang, Fu-En and Wuen, Wen-Shen and Sun, Min},
    booktitle = {British Machine Vision Conference (BMVC)},
    year      = {2023},
}
