Multi-person Human Pose Estimation with HRNet in PyTorch


This is an unofficial implementation of the paper Deep High-Resolution Representation Learning for Human Pose Estimation.
The code is a simplified version of the official code, written with ease of use in mind.

The code is fully compatible with the official pre-trained weights, and the results match those of the original implementation (with only slight differences on GPU due to CUDA).

This repository provides:

  • A simple HRNet implementation in PyTorch (>=1.0) - compatible with official weights.
  • A simple class (SimpleHRNet) that loads the HRNet network for human pose estimation, loads the pre-trained weights, and makes predictions on a single image or a batch of images.
  • Multi-person support with YOLOv3 (enabled by default).
  • A reference code that runs a live demo reading frames from a webcam or a video file.
  • Relatively simple code for training and testing the HRNet network.
  • A specific script for training the network on the COCO dataset.

Class usage

import cv2
from SimpleHRNet import SimpleHRNet

model = SimpleHRNet(48, 17, "./weights/pose_hrnet_w48_384x288.pth")
image = cv2.imread("image.png", cv2.IMREAD_COLOR)

joints = model.predict(image)
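In the constructor call above, 48 is the HRNet width (HRNet-W48) and 17 is the number of joints (the COCO keypoint set). The return value of predict is assumed here to be a NumPy array of shape (num_people, num_joints, 3), with one (y, x, confidence) row per joint; a minimal sketch of consuming it, using a dummy array in place of real model output:

```python
import numpy as np

# Dummy stand-in for model.predict(image): the output is assumed to be an
# array of shape (num_people, num_joints, 3), one (y, x, confidence) row
# per joint (assumed layout; check SimpleHRNet for the exact format).
joints = np.zeros((2, 17, 3))   # e.g. two detected people, 17 COCO joints
joints[:, :, 2] = 0.9           # pretend every joint was detected confidently

for person_id, person in enumerate(joints):
    for y, x, confidence in person:
        if confidence > 0.5:
            # here you would draw the keypoint, e.g. with cv2.circle at (int(x), int(y))
            pass
```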

Running the live demo

From a connected camera:

python scripts/live-demo.py --camera_id 0

From a saved video:

python scripts/live-demo.py --filename video.mp4

For help:

python scripts/live-demo.py --help
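Conceptually, the live demo boils down to a read-predict-draw loop over frames. A minimal sketch of that loop, with a stub in place of SimpleHRNet.predict and a list of blank frames in place of cv2.VideoCapture (both are stand-ins, so the sketch runs without a camera or pre-trained weights):

```python
import numpy as np

def predict_stub(frame):
    # Stand-in for SimpleHRNet.predict: pretend one person was found,
    # with 17 joints, each encoded as (y, x, confidence).
    return np.zeros((1, 17, 3))

# Three blank 480x640 BGR frames stand in for frames read from a camera or video.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]

predictions = []
for frame in frames:
    joints = predict_stub(frame)  # with the real class: model.predict(frame)
    predictions.append(joints)    # here you would overlay the joints and display the frame
```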

Running the training script

python scripts/train_coco.py

For help:

python scripts/train_coco.py --help

Requirements

  • Install the required packages
    pip install -r requirements.txt
  • Download the official pre-trained weights from https://github.com/leoxiaobin/deep-high-resolution-net.pytorch
  • For multi-person support:
    • Get YOLOv3:
      • Clone YOLOv3 into the folder ./models/detectors and rename the folder from PyTorch-YOLOv3 to yolo, OR
      • Update git submodules
        git submodule update --init --recursive
    • Install YOLOv3 required packages
      pip install -r requirements.txt (run from the folder ./models/detectors/yolo)
    • Download the pre-trained weights by running the script download_weights.sh from the weights folder
    • (Optional) Download the COCO dataset and save it in ./datasets/COCO
    • Your folders should look like:
      simple-HRNet
      ├── datasets                (datasets - for training only)
      │  └── COCO                 (COCO dataset)
      ├── losses                  (loss functions)
      ├── misc                    (misc)
      │  └── nms                  (CUDA nms module - for training only)
      ├── models                  (pytorch models)
      │  └── detectors            (people detectors)
      │    └── yolo               (PyTorch-YOLOv3 repository)
      │      ├── ...
      │      └── weights          (YOLOv3 weights)
      ├── scripts                 (scripts)
      ├── testing                 (testing code)
      ├── training                (training code)
      └── weights                 (HRNet weights)
      
    • If you want to run the training script on COCO (scripts/train_coco.py), you have to build the nms module first.
      Please note that a Linux machine with CUDA is currently required. Build it with either:
      • cd misc; make or
      • cd misc/nms; python setup_linux.py build_ext --inplace

About


License: GNU General Public License v3.0

