BernieZhu / Mask_RCNN_Humanpose

Mask R-CNN for Human Pose Estimation on Keras and TensorFlow.

Mask RCNN for Human Pose Estimation

This repository includes the code for evaluation, with modifications that make most of the functions from the original code work for the keypoint detection task.

The original code comes from https://github.com/Superlee506/Mask_RCNN_Humanpose and https://github.com/matterport/Mask_RCNN, written for Python 3, Keras, and TensorFlow. The code reproduces the human pose estimation results of Mask R-CNN (https://arxiv.org/abs/1703.06870).

Problems

  • Low performance. The visualized keypoint detections look reasonable, but the evaluation results are much lower than those reported in the paper. I have trained the model several times and the results were almost the same.
  • No multi-GPU support.

For comparison, RodrigoGantier's project (another Mask R-CNN keypoint port) has the following problems:

  • Its code has few comments and still uses the original names from @matterport's project, which makes it hard to understand.
  • When I trained that model, I found it hard to converge, as described in issue #3.

Requirements

  • Python 3.5+
  • TensorFlow 1.4+
  • Keras 2.0.8+
  • Jupyter Notebook
  • Numpy, skimage, scipy, Pillow, cython, h5py

Getting Started

  • Search for "/home" and change those paths before running any of the code.
  • inference_humanpose.ipynb shows how to predict human keypoints with the trained model; it randomly chooses an image from the validation set. You can download pre-trained COCO weights for human pose estimation (mask_rcnn_coco_humanpose.h5) from the releases page (https://github.com/Superlee506/Mask_RCNN_Humanpose/releases).
  • train_humanpose.ipynb shows how to train the model step by step. You can also run "python train_humanpose.py" to start training.
  • inspect_humanpose.ipynb visualizes the proposal target keypoints to check their validity. It also outputs some inner layers to help debug the model.
  • demo_human_pose.ipynb: a demo for image input from the "images" folder. [04-11-2018]
  • video_demo.py: a demo for video input from a camera. [04-11-2018]
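
For quick reference, the sketch below shows roughly what the inference notebook does. The module, class, and method names (coco.CocoConfig, detect_keypoint, the file paths) follow Matterport-style conventions and are assumptions here, not verified against this fork:

```python
# Minimal inference sketch (names assumed from Matterport's Mask_RCNN layout).
import skimage.io
import coco                      # CocoConfig lives here in Matterport's layout
import model as modellib

class InferenceConfig(coco.CocoConfig):
    GPU_COUNT = 1                # run on a single image at a time
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir="logs")
model.load_weights("mask_rcnn_coco_humanpose.h5", by_name=True)

image = skimage.io.imread("images/example.jpg")       # hypothetical file name
results = model.detect_keypoint([image], verbose=0)   # method name assumed
r = results[0]   # per-person boxes, scores, and keypoints
```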

Evaluation

Results on the COCO 2017 Keypoint Detection Task (http://cocodataset.org/#keypoints-2017), evaluated on person_keypoints_val2017.json:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.204
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.564
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.100
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.182
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.253
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.277
Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.642
Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.202
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.232
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.338
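
The table above is the standard pycocotools keypoint evaluation output. A minimal sketch of how to reproduce it, assuming your predictions have been written to a COCO-format results file (the file name here is hypothetical):

```python
# Run the official COCO keypoint evaluation with pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/person_keypoints_val2017.json")
coco_dt = coco_gt.loadRes("keypoint_results.json")  # model predictions, COCO format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="keypoints")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints the AP/AR table shown above
```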

Discussion

  • I convert the joint coordinates into an integer label in [0, 56*56) and use tf.nn.sparse_softmax_cross_entropy_with_logits as the loss function. This follows the original Detectron code and is the key reason my loss converges quickly (see the sketch after this list).
  • If you still want to use the keypoint mask as the output, you had better adopt the modified loss function proposed by @QtSignalProcessing in issue #2, because after crop and resize the keypoint masks may have more than one pixel set to 1, which makes the original softmax cross-entropy loss hard to converge.
  • Although the loss converges quickly, the predictions are not as good as the original paper's, especially for the left/right shoulders and left/right knees. I am not sure why, so I am releasing the code; any contribution or suggestion to this repository is welcome.
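
A minimal sketch of that loss, assuming a 56x56 heatmap and the 17 COCO keypoints; the tensor names are illustrative, not the repo's actual variables:

```python
# Each ground-truth joint inside a 56x56 RoI heatmap is encoded as one integer
# class in [0, 56*56), so a softmax over the flattened heatmap selects exactly
# one location per joint.
import tensorflow as tf

def keypoint_softmax_loss(target_xy, pred_heatmaps, valid):
    """target_xy: [N, 17, 2] integer (x, y) in heatmap coordinates.
    pred_heatmaps: [N, 17, 56, 56] predicted logits.
    valid: [N, 17], 1 for visible/labeled joints, 0 otherwise."""
    labels = target_xy[..., 1] * 56 + target_xy[..., 0]   # (x, y) -> [0, 56*56)
    logits = tf.reshape(pred_heatmaps, [-1, 17, 56 * 56])
    loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=logits)                     # one "hot" pixel per joint
    loss = loss * tf.cast(valid, loss.dtype)              # mask out unlabeled joints
    return tf.reduce_sum(loss) / tf.maximum(
        tf.reduce_sum(tf.cast(valid, loss.dtype)), 1.0)
```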
