
Official code of "HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation", CVPR 2021


HybrIK


This repo contains the code of our paper:

HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation

Jiefeng Li, Chao Xu, Zhicun Chen, Siyuan Bian, Lixin Yang, Cewu Lu

[Paper] [Supplementary Material] [arXiv] [Project Page]

In CVPR 2021

Twist-and-Swing Decomposition
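
For intuition: HybrIK's analytical IK splits each relative bone rotation into a swing (which aligns the template bone direction with the direction recovered from the predicted 3D joints) and a twist (the residual rotation about the bone axis, predicted by the network). The sketch below only illustrates that decomposition with numpy/scipy; it is not the repository's implementation.

# Illustration of twist-and-swing decomposition (not the repo's implementation).
import numpy as np
from scipy.spatial.transform import Rotation as R

def swing_twist(rot, bone_axis):
    """Split `rot` into swing * twist, where the twist is about `bone_axis`."""
    bone_axis = bone_axis / np.linalg.norm(bone_axis)
    target = rot.apply(bone_axis)                 # bone direction after rotation
    axis = np.cross(bone_axis, target)
    cos = np.clip(np.dot(bone_axis, target), -1.0, 1.0)
    if np.linalg.norm(axis) < 1e-8:               # parallel directions: no swing needed
        swing = R.identity()
    else:
        swing = R.from_rotvec(axis / np.linalg.norm(axis) * np.arccos(cos))
    twist = swing.inv() * rot                     # leftover rotation, about the bone axis
    return swing, twist

rot = R.from_euler("xyz", [30, 45, 60], degrees=True)
swing, twist = swing_twist(rot, np.array([0.0, 1.0, 0.0]))
assert np.allclose((swing * twist).as_matrix(), rot.as_matrix())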

News 🚩

[2022/12/03] The HybrIK Blender add-on is now available for download. The output of HybrIK can be imported into Blender and saved as FBX.

[2022/08/16] Pretrained model with HRNet-W48 backbone is available.

[2022/07/31] Training code with predicted camera is released.

[2022/07/25] HybrIK is now supported in AlphaPose! A multi-person demo with pose tracking is available.

[2022/04/27] Google Colab is ready to use.

[2022/04/26] SOTA results achieved by adding the 3DPW dataset for training.

[2022/04/25] The demo code is released!

TODO

  • Provide pretrained model
  • Provide parsed data annotations

Installation instructions

# 1. Create a conda virtual environment.
conda create -n hybrik python=3.8 -y
conda activate hybrik

# 2. Install PyTorch
conda install pytorch==1.9.1 torchvision==0.10.1 -c pytorch

# 3. Install PyTorch3D (Optional, only for visualization)
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install -c bottler nvidiacub
pip install git+ssh://git@github.com/facebookresearch/pytorch3d.git@stable

# 4. Pull our code
git clone https://github.com/Jeff-sjtu/HybrIK.git
cd HybrIK

# 5. Install
pip install pycocotools
python setup.py develop  # or "pip install -e ."
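
Optionally, a quick way to sanity-check the environment after step 5. The module names below (torch, torchvision, pytorch3d, hybrik) follow the packages installed above; pytorch3d is only present if you did the optional step 3.

# Optional sanity check, run inside the activated `hybrik` environment.
import torch, torchvision

print("torch:", torch.__version__)               # expected 1.9.1
print("torchvision:", torchvision.__version__)   # expected 0.10.1
print("CUDA available:", torch.cuda.is_available())

try:
    import pytorch3d                             # only needed for visualization
    print("pytorch3d:", pytorch3d.__version__)
except ImportError:
    print("pytorch3d not installed (visualization disabled)")

import hybrik                                    # installed by `python setup.py develop`
print("hybrik imported from:", hybrik.__file__)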

Download models

  • Download the SMPL model basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here and place it at common/utils/smplpytorch/smplpytorch/native/models (a quick check script is sketched after this list).
  • Download our pretrained model (paper version) from [ Google Drive | Baidu (code: qre2) ].
  • Download our pretrained model (with predicted camera) from [ Google Drive | Baidu (code: 4qyv) ].
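
A small, optional check that the downloads are where the code expects them. The SMPL path follows the first bullet above, and the checkpoint name ./pretrained_hrnet.pth matches the Demo section below.

# Optional: verify the downloaded files are in place (run from ${ROOT}).
from pathlib import Path

smpl = Path("common/utils/smplpytorch/smplpytorch/native/models/"
            "basicModel_neutral_lbs_10_207_0_v1.0.0.pkl")
ckpt = Path("pretrained_hrnet.pth")              # pretrained model with predicted camera

for f in (smpl, ckpt):
    status = "found" if f.is_file() else "MISSING"
    print(f"{status}: {f}")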

Demo

First, make sure you have downloaded the pretrained model (with predicted camera) and placed it in the ${ROOT} directory, i.e., ./pretrained_hrnet.pth.

  • Visualize HybrIK on videos (processed frame by frame) and save results:
python scripts/demo_video.py --video-name examples/dance.mp4 --out-dir res_dance --save-pk

The saved results in ./res_dance/res.pk can be imported to Blender with our add-on.
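
If you want to inspect the saved results outside Blender, the sketch below assumes res.pk is a standard Python pickle (the .pk suffix suggests so) and simply lists its contents, since the exact keys depend on the demo script version.

# Inspect the saved demo results (assumes a standard Python pickle).
import pickle

with open("res_dance/res.pk", "rb") as f:
    results = pickle.load(f)

print(type(results))
if isinstance(results, dict):
    for key, value in results.items():
        shape = getattr(value, "shape", None)    # numpy arrays report their shape
        print(key, shape if shape is not None else type(value))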

  • Visualize HybrIK on images:
python scripts/demo_image.py --img-dir examples --out-dir res

Fetch data

Download the Human3.6M, MPI-INF-3DHP, 3DPW, and MSCOCO datasets and arrange them in the directory structure shown below (a small layout check is sketched after the annotation list). Thanks to the great work of Moon et al., we use the Human3.6M images provided by PoseNet.

|-- data
`-- |-- h36m
    `-- |-- annotations
        `-- images
`-- |-- pw3d
    `-- |-- json
        `-- imageFiles
`-- |-- 3dhp
    `-- |-- annotation_mpi_inf_3dhp_train.json
        |-- annotation_mpi_inf_3dhp_test.json
        |-- mpi_inf_3dhp_train_set
        `-- mpi_inf_3dhp_test_set
`-- |-- coco
    `-- |-- annotations
        |   |-- person_keypoints_train2017.json
        |   `-- person_keypoints_val2017.json
        |-- train2017
        `-- val2017
  • Download Human3.6M parsed annotations. [ Google | Baidu ]
  • Download 3DPW parsed annotations. [ Google | Baidu ]
  • Download MPI-INF-3DHP parsed annotations. [ Google | Baidu ]
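
Here is the layout check mentioned above: a minimal sketch that assumes the datasets live under ./data relative to ${ROOT}, exactly as drawn in the tree.

# Optional: check that the dataset layout matches the tree above.
from pathlib import Path

expected = [
    "data/h36m/annotations", "data/h36m/images",
    "data/pw3d/json", "data/pw3d/imageFiles",
    "data/3dhp/annotation_mpi_inf_3dhp_train.json",
    "data/3dhp/annotation_mpi_inf_3dhp_test.json",
    "data/3dhp/mpi_inf_3dhp_train_set", "data/3dhp/mpi_inf_3dhp_test_set",
    "data/coco/annotations/person_keypoints_train2017.json",
    "data/coco/annotations/person_keypoints_val2017.json",
    "data/coco/train2017", "data/coco/val2017",
]

missing = [p for p in expected if not Path(p).exists()]
print("all present" if not missing else f"missing: {missing}")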

Train from scratch

./scripts/train_smpl_cam.sh test_3dpw configs/256x192_adam_lr1e-3-res34_smpl_3d_cam_2x_mix_w_pw3d.yaml

Evaluation

Download the pretrained model (ResNet-34 or HRNet-W48).

./scripts/validate_smpl_cam.sh ./configs/256x192_adam_lr1e-3-hrw48_cam_2x_w_pw3d_3dhp.yaml ./pretrained_hrnet.pth

Results

Method | 3DPW (PA-MPJPE) | Human3.6M (PA-MPJPE)
SPIN | 59.2 | 41.1
VIBE | 56.5 | 41.5
VIBE w. 3DPW | 51.9 | 41.4
PARE | 49.3 | -
PARE w. 3DPW | 46.4 | -
HybrIK (ResNet-34) | 48.8 | 34.5
HybrIK (ResNet-34) w. 3DPW | 45.3 | 36.3

MODEL ZOO

Backbone | Training Data | PA-MPJPE (3DPW) | MPJPE (3DPW) | PA-MPJPE (Human3.6M) | MPJPE (Human3.6M) | Download | Config
ResNet-34 | w/o 3DPW | - | - | - | - | model | cfg
ResNet-34 | w/ 3DPW | 44.5 | 72.4 | 33.8 | 55.5 | model | cfg
HRNet-W48 | w/o 3DPW | 48.6 | 88.0 | 29.5 | 50.4 | model | cfg
HRNet-W48 | w/ 3DPW | 41.8 | 71.3 | 29.8 | 47.1 | model | cfg

Notes

  • All models assume a fixed focal length and predict camera parameters (a rough projection sketch follows these notes).
  • Flip test is used by default.
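
As a rough illustration of the first note (fixed focal length plus a predicted camera translation), the sketch below performs a standard perspective projection; the focal length, principal point, and function name are assumptions for the example, not values read from the configs.

# Perspective projection with a fixed focal length and a predicted translation
# (illustrative values; not taken from the configs).
import numpy as np

def project(joints_3d, cam_trans, focal=1000.0, center=(96.0, 128.0)):
    """Project root-relative 3D joints (N, 3) to pixel coordinates (N, 2)."""
    pts = joints_3d + cam_trans                  # move joints into the camera frame
    u = focal * pts[:, 0] / pts[:, 2] + center[0]
    v = focal * pts[:, 1] / pts[:, 2] + center[1]
    return np.stack([u, v], axis=1)

joints = np.array([[0.0, 0.0, 0.0], [0.1, -0.4, 0.05]])   # metres, root-relative
print(project(joints, cam_trans=np.array([0.0, 0.2, 3.0])))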

Citing

If our code helps your research, please consider citing the following paper:

@inproceedings{li2021hybrik,
    title={Hybrik: A hybrid analytical-neural inverse kinematics solution for 3d human pose and shape estimation},
    author={Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={3383--3393},
    year={2021}
}


License: MIT License

