pytorch_Realtime_Multi-Person_Pose_Estimation

This is a PyTorch version of Realtime_Multi-Person_Pose_Estimation. The original code is here: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation

Introduction

Code repository for reproducing the CVPR 2017 oral paper "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields" in PyTorch.

Results

License

Requirements

  1. Pytorch
  2. Caffe is required if you want to convert a Caffe model to a PyTorch model.
  3. pip install pycocotools
  4. pip install tensorboardX
  5. pip install torch-encoding
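
As a quick sanity check of the environment, the snippet below (not part of the repo) tries to import each dependency; it assumes torch-encoding is importable as `encoding`.

```python
# Environment sanity check (hypothetical helper, not part of this repo).
# Verifies that the required packages are importable and whether CUDA is visible.
import importlib

for pkg in ["torch", "pycocotools", "tensorboardX", "encoding"]:
    try:
        importlib.import_module(pkg)
        print(f"{pkg}: OK")
    except ImportError:
        print(f"{pkg}: MISSING - install it before running the demos")

import torch
print("CUDA available:", torch.cuda.is_available())
```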

Demo

  • Download the converted PyTorch model.
  • cd network/caffe_to_pytorch; python convert.py converts a trained Caffe model to a PyTorch model. The converted model has a relative error below 1e-6 and is written to ./network/weight after conversion.
  • Or use the model trained from scratch in this repo, which has better accuracy on the validation set.
  • python demo/picture_demo.py runs the picture demo (a hedged sketch of the inference flow is given after this list).
  • python demo/web_demo.py runs the web demo.
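
The following is a minimal inference sketch of what the picture demo does, not the repo's actual code: `run_pose_inference`, the preprocessing constants, and the output order (PAFs, then heatmaps) are assumptions for illustration.

```python
# Hedged sketch of a picture-demo style forward pass (illustrative only; the
# actual demo/picture_demo.py in this repo may differ).
import cv2
import numpy as np
import torch

def run_pose_inference(model, image_path, input_size=368):
    """model: the pose network with weights already loaded, e.g. via
    model.load_state_dict(torch.load('network/weight/<converted_model>.pth'))."""
    img = cv2.imread(image_path)                                  # BGR, HxWx3
    inp = cv2.resize(img, (input_size, input_size)).astype(np.float32)
    inp = inp / 256.0 - 0.5                                       # rtpose-style preprocessing (assumed)
    inp = torch.from_numpy(inp.transpose(2, 0, 1)).unsqueeze(0)   # (1, 3, H, W)
    model.eval()
    with torch.no_grad():
        paf, heatmap = model(inp)                                 # output order is an assumption
    return heatmap, paf
```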

Evaluate

  • python evaluate/evaluation.py evaluates the model on the image split used by the original author.
  • It should reach 0.598 mAP for the original rtpose model, whereas the original repo reports 0.577 mAP; the gain comes from averaging the heatmaps and PAFs over the original and horizontally flipped images during evaluation (see the sketch below).
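
The flip trick mentioned above can be sketched as follows; this is illustrative only, and the function name, channel-swap index lists, and output order are assumptions rather than the repo's actual evaluation code.

```python
# Sketch of left/right flip augmentation at evaluation time: run the network on
# the image and on its horizontal mirror, flip the mirrored outputs back, swap
# left/right channels, and average the two predictions.
import torch

# Hypothetical index maps: for each heatmap/PAF channel, the channel it swaps
# with under a left/right flip (e.g. left shoulder <-> right shoulder).
FLIP_HEATMAP_IDX = list(range(19))   # placeholder: identity mapping
FLIP_PAF_IDX = list(range(38))       # placeholder: identity mapping

def predict_with_flip(model, image):
    """image: float tensor of shape (1, 3, H, W)."""
    with torch.no_grad():
        paf, heat = model(image)
        paf_f, heat_f = model(torch.flip(image, dims=[3]))   # mirror the width axis
    # Undo the spatial flip, then swap left/right channels.
    heat_f = torch.flip(heat_f, dims=[3])[:, FLIP_HEATMAP_IDX]
    paf_f = torch.flip(paf_f, dims=[3])[:, FLIP_PAF_IDX]
    # Note: the x-components of the flipped PAF vectors would also need their
    # sign negated; omitted here for brevity.
    return (heat + heat_f) / 2, (paf + paf_f) / 2
```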

Pretrained models & performance on the dataset split used by the original rtpose

Original rtpose model vs. model trained from scratch (note that preprocessing differs between models; see the preprocessing sketch after the table).

Reported in the paper (VGG19)    mAP in this repo (VGG19)    Trained from scratch in this repo
0.577                            0.598                       0.614
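
The preprocessing note above is easy to trip over when feeding images to the different checkpoints. Below is a minimal sketch, assuming the two conventions commonly used in rtpose-style PyTorch ports (Caffe-style scaling for the converted model, ImageNet normalization for models trained from scratch); the exact normalization in this repo may differ.

```python
import numpy as np

def preprocess_rtpose(img_bgr):
    """Caffe/rtpose-style preprocessing (assumed): scale to [-0.5, 0.5], keep BGR."""
    x = img_bgr.astype(np.float32) / 256.0 - 0.5
    return x.transpose(2, 0, 1)[None]              # (1, 3, H, W)

def preprocess_imagenet(img_bgr):
    """ImageNet-style preprocessing (assumed for models trained from scratch):
    convert BGR -> RGB, scale to [0, 1], then normalize with ImageNet mean/std."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = img_bgr[:, :, ::-1].astype(np.float32) / 255.0
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[None]
```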

Training

  • cd training; bash getData.sh downloads the COCO images into dataset/COCO/images/ and the keypoint annotations into dataset/COCO/annotations/.
  • Download the masks of unlabeled persons from Dropbox.
  • Download the official training format from Dropbox.
  • python train_VGG19.py --batch_size 100 --logdir {where to store tensorboardX logs}
  • python train_ShuffleNetV2.py --batch_size 160 --logdir {where to store tensorboardX logs}
  • python train_SH.py --batch_size 64 --lr 0.1 --logdir {where to store tensorboardX logs} (a sketch of the tensorboardX logging pattern follows this list)
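
The --logdir flag suggests the training scripts log to tensorboardX; here is a hedged sketch of that logging pattern. The argument names match the commands above, but the loop body is a placeholder, not the repo's training code.

```python
# Hedged sketch of --logdir / tensorboardX usage implied by the training commands.
import argparse
from tensorboardX import SummaryWriter

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=100)
parser.add_argument("--lr", type=float, default=1.0)
parser.add_argument("--logdir", type=str, default="./logs")
args = parser.parse_args()

writer = SummaryWriter(log_dir=args.logdir)

# Dummy loop; replace with the real dataloader / model / optimizer.
for step in range(10):
    heatmap_loss = 0.5 / (step + 1)   # placeholder value
    paf_loss = 0.7 / (step + 1)       # placeholder value
    writer.add_scalar("train/heatmap_loss", heatmap_loss, step)
    writer.add_scalar("train/paf_loss", paf_loss, step)
writer.close()
```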

Related repository

Network Architecture

  • Testing architecture (figure)

  • Training architecture (figure)

Contributions

All contributions are welcome. If you encounter any issue (including examples of images where the method fails), feel free to open an issue.

Citation

Please cite the paper in your publications if it helps your research:

@InProceedings{cao2017realtime,
  title     = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},
  author    = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2017}
}

@InProceedings{8486591,
  title     = {Magnify-Net for Multi-Person 2D Pose Estimation},
  author    = {H. Wang and W. P. An and X. Wang and L. Fang and J. Yuan},
  booktitle = {2018 IEEE International Conference on Multimedia and Expo (ICME)},
  month     = {July},
  year      = {2018},
  pages     = {1-6}
}
