jwu76033 / convolutional-pose-machines-release

Code repository for Convolutional Pose Machines, CVPR'16

Convolutional Pose Machines

Shih-En Wei, Varun Ramakrishna, Takeo Kanade, Yaser Sheikh, "Convolutional Pose Machines", CVPR 2016.

This project is licensed under the terms of the GPL v2 license. By using the software, you are agreeing to the terms of the license agreement.

Contact: Shih-En Wei (weisteady@gmail.com)

Teaser (demo animation)

Recent Updates

  • Synced our fork of Caffe with the most recent version (Dec. 2016) so that Pascal GPUs can work (tested with CUDA 8.0 and cuDNN 5).
  • Included a VGG-pretrained model in the MATLAB (and also Python) code. This model was used in the CVPR'16 demo; it scores 90.1% on the MPII test set and can be trained in much less time than previous models.
  • We are working on releasing the code for our new work on multi-person pose estimation, demonstrated at ECCV'16 (best demo award!).

Before Everything

  • Watch some videos.
  • Install Caffe. If you are interested in training this model on your own machine, or in real-time systems, please use our version (a submodule in this repo) with customized layers. Make sure you have compiled the Python and MATLAB interfaces. This repository is known to run on Ubuntu 14.04, OpenCV 2.4.10, CUDA 8.0, and cuDNN 5. The following assumes you use cmake to compile Caffe in <repo path>/caffe/build.
  • Include <repo path>/caffe/build/install/lib in the environment variable $LD_LIBRARY_PATH.
  • Include <repo path>/caffe/build/install/python in the environment variable $PYTHONPATH. A minimal sanity-check sketch follows this list.
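If you want to verify the setup from Python, here is a minimal sketch. The repository path below is an assumption, and $LD_LIBRARY_PATH must still be exported in your shell before Python starts (setting it inside Python has no effect on dynamic linking):

# Minimal path sanity check; the repository path below is an assumed example.
import os
import sys

caffe_root = '/path/to/convolutional-pose-machines-release/caffe'  # hypothetical path

# $LD_LIBRARY_PATH must already be exported in the shell before Python starts;
# here we only check that it mentions the Caffe install libraries.
if 'caffe/build/install/lib' not in os.environ.get('LD_LIBRARY_PATH', ''):
    print('Warning: add <repo path>/caffe/build/install/lib to $LD_LIBRARY_PATH')

# Extending sys.path at runtime is equivalent to setting $PYTHONPATH.
sys.path.insert(0, os.path.join(caffe_root, 'build', 'install', 'python'))
import caffe  # should import cleanly if the paths above are correct
print('pycaffe loaded from:', caffe.__file__)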

Testing

First, run testing/get_model.sh to retrieve trained models from our web server.

Python

  • This demo file shows how to detect the poses of multiple people, as we demonstrated at CVPR'16. For real-time performance, please read it for further explanation. A minimal single-image sketch follows.
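The full demo handles multiple scales, multiple people, and a person-center map; the sketch below only illustrates a basic single-scale forward pass with pycaffe. The deploy/model file names, the test image path, and the single 'data' input blob are placeholders; check the config and the files fetched by testing/get_model.sh for the actual names (some deploy files also expect a Gaussian center map as a second input).

import cv2
import numpy as np
import caffe

caffe.set_mode_gpu()
# File names are placeholders; use the files fetched by testing/get_model.sh.
net = caffe.Net('pose_deploy.prototxt', 'pose_iter.caffemodel', caffe.TEST)

img = cv2.imread('sample_image/test.jpg')                 # hypothetical test image
inp = cv2.resize(img, (368, 368)).astype(np.float32)
inp = inp / 256.0 - 0.5                                   # CPM input normalization
inp = inp.transpose((2, 0, 1))[np.newaxis, ...]           # HWC -> NCHW

net.blobs['data'].reshape(*inp.shape)
net.blobs['data'].data[...] = inp
out = net.forward()
belief_maps = out[list(out.keys())[-1]][0]                # final-stage belief maps
print('belief map shape:', belief_maps.shape)             # (num parts + background, h, w)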

Matlab

    1. CPM_demo.m: Put the test image into sample_image, then run it! You can select one of the models (we provide 4) or other parameters in config.m. If you just want to try our best-scoring model, leave the defaults.
    2. CPM_benchmark.m: Run the model on the test benchmark and see the scores. Prediction files will be saved in testing/predicts.

Training

  • Run get_data.sh to get the datasets, including the FLIC dataset, the Leeds Sports Pose dataset (and its extended training set), and the MPII dataset.
  • Run genJSON(<dataset_name>) to generate a JSON file in the training/json/ folder (you'll have to create it). The dataset name can be MPI, LEEDS, or FLIC. The JSON files contain the raw information needed for training from each individual dataset.
  • Run python genLMDB.py to generate LMDBs for the CPM data layer in our Caffe. Change the main function to select the dataset, and note that you can generate one LMDB from multiple datasets.
  • Run python genProto.py to get the prototxt files for Caffe. Read the further explanation for the layer parameters.
  • Train with the generated prototxts and collect caffemodels. The end-to-end command sequence is sketched after this list.
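For reference, a rough sketch of driving the pipeline from Python under stated assumptions: the working directory is training/, genJSON is run separately in MATLAB, and the Caffe binary path and solver file name are placeholders.

# Assumes the current directory is training/; file and binary names are placeholders.
import subprocess

subprocess.run(['bash', 'get_data.sh'], check=True)        # download FLIC / LSP / MPII
# In MATLAB (run separately):  genJSON('MPI')  -> writes a JSON file under training/json/
subprocess.run(['python', 'genLMDB.py'], check=True)       # pack the selected dataset(s) into LMDB
subprocess.run(['python', 'genProto.py'], check=True)      # emit the train/solver prototxts
# Train with the bundled Caffe; the solver file name below is a placeholder.
subprocess.run(['../caffe/build/install/bin/caffe', 'train',
                '--solver=pose_solver.prototxt', '--gpu=0'], check=True)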

Related Repository

Citation

Please cite CPM in your publications if it helps your research:

@inproceedings{wei2016cpm,
    author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},
    booktitle = {CVPR},
    title = {Convolutional pose machines},
    year = {2016}
}
