The original code is from https://github.com/matterport/Mask_RCNN, built on Python 3, Keras, and TensorFlow. This code reproduces the human pose estimation part of Mask R-CNN (https://arxiv.org/abs/1703.06870). This project aims to address issue #2. When starting it, I referred to another project by @RodrigoGantier.
- Python 3.5+
- TensorFlow 1.4+
- Keras 2.0.8+
- Jupyter Notebook
- Numpy, skimage, scipy, Pillow, cython, h5py
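The dependencies above can be installed with pip; as a sketch (package names are the common PyPI ones, and the version pins simply mirror the minimums listed above, so adjust them to your environment):

```shell
# Install the dependencies listed above. TensorFlow 1.x and an
# older Keras are required; newer releases are not compatible.
pip install "tensorflow>=1.4,<2" "keras>=2.0.8" \
    numpy scipy scikit-image Pillow cython h5py jupyter
```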
- inference_humanpose.ipynb shows how to predict human keypoints using my trained model. It randomly chooses an image from the validation set. You can download pre-trained COCO weights for human pose estimation (mask_rcnn_coco_humanpose.h5) from the releases page (https://github.com/Superlee506/Mask_RCNN_Humanpose/releases).
- train_humanpose.ipynb shows how to train the model step by step. You can also use "python train_humanpose.py" to start training.
- inspect_humanpose.ipynb visualizes the proposal target keypoints to check their validity. It also outputs some inner layers to help debug the model.
- demo_human_pose.ipynb: a new demo for images read from the "images" folder. [04-11-2018]
- video_demo.py: a new demo for video input from a camera. [04-11-2018]
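The notebooks above work with keypoints in the COCO format: for each detected person, 17 [x, y, visibility] triples in a fixed joint order. As a small illustration of consuming such output (the exact shape returned by this repo's detection call is an assumption; only the COCO joint order and visibility convention are standard), a helper that maps one person's raw triples to named joints:

```python
# COCO's 17 keypoint names, in the order used by the dataset.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def name_keypoints(person_keypoints):
    """Map one person's [x, y, visibility] triples to COCO joint names.

    Only keypoints with visibility > 0 are kept, matching COCO's
    convention that v == 0 means "not labeled".
    """
    return {
        name: (x, y)
        for name, (x, y, v) in zip(COCO_KEYPOINTS, person_keypoints)
        if v > 0
    }

# Hypothetical detection: nose and left eye labeled, the rest unlabeled.
kps = [[120, 40, 2], [118, 35, 2]] + [[0, 0, 0]] * 15
named = name_keypoints(kps)  # → {"nose": (120, 40), "left_eye": (118, 35)}
```

This keeps downstream visualization code readable, since joints are looked up by name rather than by index.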