PhysFormer

Main code of the CVPR 2022 paper "PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer" [.pdf]

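The temporal difference transformer builds its self-attention on temporal difference convolution, i.e., a 3D convolution whose response is offset by a local difference term so that frame-to-frame changes, rather than raw intensities, drive the features. The sketch below is a minimal PyTorch illustration of that operator; the class name, kernel size, and 'theta' weighting are illustrative assumptions, not the repository's exact module:

    # Minimal sketch of a temporal difference convolution (illustrative,
    # not the repository's exact module): a vanilla 3D convolution whose
    # output is offset by a local difference term weighted by theta.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalDifferenceConv(nn.Module):
        def __init__(self, in_ch, out_ch, theta=0.7):
            super().__init__()
            self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
            self.theta = theta  # theta = 0 recovers a plain 3D convolution

        def forward(self, x):
            out = self.conv(x)  # vanilla convolution over (T, H, W)
            if self.theta == 0:
                return out
            # The kernel summed over its support acts as a 1x1x1 convolution,
            # so out - theta * out_diff aggregates local differences
            # x(p) - x(p0) instead of raw intensities.
            kernel_diff = self.conv.weight.sum(dim=(2, 3, 4), keepdim=True)
            out_diff = F.conv3d(x, kernel_diff)
            return out - self.theta * out_diff

    # Example: 2 clips, 3 channels, 16 frames, 32x32 crops
    x = torch.randn(2, 3, 16, 32, 32)
    print(TemporalDifferenceConv(3, 8)(x).shape)  # torch.Size([2, 8, 16, 32, 32])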

Setup (PyTorch 1.9 and imgaug; the 'module load' line assumes a cluster with environment modules, otherwise install PyTorch 1.9 directly):

module load pytorch/1.9

pip install --user imgaug

Training on VIPL-HR:

python train_Physformer_160_VIPL.py
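
During training, PhysFormer is supervised with both temporal and frequency-domain losses (see the paper for the exact curriculum weighting). A common temporal term for rPPG supervision is the negative Pearson loss; below is a minimal sketch assuming only a (batch, T) signal layout, not the training script's exact loss code:

    import torch

    def neg_pearson_loss(pred, gt):
        """1 - Pearson correlation between predicted and ground-truth rPPG.

        pred, gt: tensors of shape (batch, T).
        """
        pred = pred - pred.mean(dim=1, keepdim=True)
        gt = gt - gt.mean(dim=1, keepdim=True)
        corr = (pred * gt).sum(dim=1) / (pred.norm(dim=1) * gt.norm(dim=1) + 1e-8)
        return (1.0 - corr).mean()

    # Sanity check: a signal is perfectly correlated with itself -> loss ~ 0
    t = torch.linspace(0, 6.28, 160).unsqueeze(0)
    print(neg_pearson_loss(torch.sin(t), torch.sin(t)))  # tensor close to 0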

Testing on one sample from VIPL-HR:

  1. Download the test data [Google Drive].
  2. Run the model inference code (with the trained checkpoint 'Physformer_VIPL_fold1.pkl' [Google Drive]) to get the predicted rPPG signal clips:
python inference_OneSample_VIPL_PhysFormer.py
  3. Calculate the HR error with the file 'Inference_HRevaluation.m' using MATLAB (this is also easy to implement in Python; see the sketch after this list).
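
As a Python alternative to 'Inference_HRevaluation.m', one can estimate each clip's HR from the dominant spectral peak of its predicted rPPG signal and then report the usual MAE/RMSE against ground truth. A minimal sketch; the function name and the 30 fps / 40-180 bpm band are illustrative assumptions:

    import numpy as np

    def rppg_to_hr(signal, fs=30.0, hr_band=(40.0, 180.0)):
        """Estimate heart rate (bpm) from one rPPG clip by FFT peak picking."""
        signal = np.asarray(signal, dtype=float)
        signal = signal - signal.mean()
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)  # Hz
        power = np.abs(np.fft.rfft(signal)) ** 2
        # Keep only the plausible HR band (40-180 bpm -> 0.67-3.0 Hz)
        band = (freqs >= hr_band[0] / 60.0) & (freqs <= hr_band[1] / 60.0)
        return 60.0 * freqs[band][np.argmax(power[band])]

    # Dummy example: one 160-frame clip of a 1.2 Hz (72 bpm) sine wave
    clip = np.sin(2 * np.pi * 1.2 * np.arange(160) / 30.0)
    hr_pred = np.array([rppg_to_hr(clip)])
    hr_gt = np.array([72.0])
    mae = np.mean(np.abs(hr_pred - hr_gt))
    rmse = np.sqrt(np.mean((hr_pred - hr_gt) ** 2))
    print(f"MAE: {mae:.2f} bpm, RMSE: {rmse:.2f} bpm")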

Citation

If you find it useful in your research, please cite:

     @inproceedings{yu2021physformer,
       title={PhysFormer: Facial Video-based Physiological Measurement with Temporal Difference Transformer},
       author={Yu, Zitong and Shen, Yuming and Shi, Jingang and Zhao, Hengshuang and Torr, Philip and Zhao, Guoying},
       booktitle={CVPR},
       year={2022}
     }

     @article{yu2023physformer++,
       title={PhysFormer++: Facial Video-based Physiological Measurement with SlowFast Temporal Difference Transformer},
       author={Yu, Zitong and Shen, Yuming and Shi, Jingang and Zhao, Hengshuang and Cui, Yawen and Zhang, Jiehua and Torr, Philip and Zhao, Guoying},
       journal={International Journal of Computer Vision (IJCV)},
       pages={1--24},
       year={2023}
     }

If you use the VIPL-HR dataset, please cite:

     @article{niu2019rhythmnet,
       title={RhythmNet: End-to-end heart rate estimation from face via spatial-temporal representation},
       author={Niu, Xuesong and Shan, Shiguang and Han, Hu and Chen, Xilin},
       journal={IEEE Transactions on Image Processing},
       year={2019}
     }


License: MIT

