lizhe00 / PoseVocab

Code of [SIGGRAPH 2023] "PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling"

Home Page: https://lizhe00.github.io/projects/posevocab/


PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling

SIGGRAPH 2023

Zhe Li, Zerong Zheng, Yuxiao Liu, Boyao Zhou, Yebin Liu

Tsinghua University

Introduction

We propose PoseVocab, a novel pose encoding method that captures dynamic human appearances under various poses for human avatar modeling.

Teaser video: teaser.mp4
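For intuition, here is a minimal, self-contained sketch of the joint-structured pose-embedding idea (an assumed simplification for illustration, not the paper's exact formulation): each joint keeps a small "vocabulary" of key rotations with learnable feature embeddings, and a query pose is encoded per joint by similarity-weighted interpolation over that vocabulary. All tensor sizes and the softmax temperature below are placeholders.

import torch

n_joints, n_keys, feat_dim = 25, 64, 32           # illustrative sizes only
key_rots = torch.randn(n_joints, n_keys, 4)       # key rotations as quaternions
key_rots = key_rots / key_rots.norm(dim=-1, keepdim=True)
key_feats = torch.nn.Parameter(torch.randn(n_joints, n_keys, feat_dim))

def encode_pose(query_rots):
    """query_rots: (n_joints, 4) unit quaternions -> (n_joints, feat_dim) embeddings."""
    # similarity of the query rotation to every key rotation of the same joint
    sim = torch.einsum('jd,jkd->jk', query_rots, key_rots).abs()
    w = torch.softmax(sim / 0.1, dim=-1)           # interpolation weights over keys
    return torch.einsum('jk,jkf->jf', w, key_feats)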

Installation

Clone this repo, then run the following commands to build and install the custom ops.

cd ./utils/posevocab_custom_ops
python setup.py install
cd ../..
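After installing, you can quickly verify that the extension built correctly. A minimal smoke test, assuming the module installs under the name posevocab_custom_ops (taken from the directory name; the actual module name registered by setup.py may differ):

# Smoke test: the module name is assumed from the directory name
# utils/posevocab_custom_ops; adjust it if setup.py registers another name.
import posevocab_custom_ops
print('custom ops loaded from:', posevocab_custom_ops.__file__)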

SMPL-X & Pretrained Models

Run on THuman4.0 Dataset

Dataset Preparation

  • Download the THuman4.0 dataset. Let's take "subject00" as an example and denote the root data directory as SUBJECT00_DIR.
  • Specify the data directory and training frame list in gen_data/main_preprocess.py (a hedged sketch of these settings follows the commands below), then run the following commands.
cd ./gen_data
python main_preprocess.py
cd ..
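The exact variable names inside gen_data/main_preprocess.py are not shown here, so the following is only an illustration of the two settings the script expects; every name below is hypothetical.

import os

# Hypothetical names; edit the corresponding variables in gen_data/main_preprocess.py.
subject00_dir = '/path/to/SUBJECT00_DIR'        # root directory of "subject00"
training_frame_list = list(range(0, 2000))      # frame indices used for training
assert os.path.isdir(subject00_dir), 'point subject00_dir at the THuman4.0 subject root'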

Training

Note: In the first training stage, our method reconstructs depth maps for the depth-guided sampling used in the later stages (a rough sketch of this idea follows the stage list below). If you want to skip the first stage, you can download our provided depth maps from this link, unzip them to SUBJECT00_DIR/depths, and directly run python main.py -c configs/subject00.yaml -m train until the network converges.

  • Stage 1: train the network; this stage also reconstructs the depth maps used in the later stages.
python main.py -c configs/subject00.yaml -m train
  • Stage 2: render depth maps, then continue training until the network converges.
python main.py -c configs/subject00.yaml -m render_depth_sequences
python main.py -c configs/subject00.yaml -m train
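The sketch below shows the general idea of depth-guided ray sampling (an assumed simplification, not the repo's implementation): instead of sampling the full near/far range of each camera ray, samples are concentrated in a thin band around the surface depth rendered in stage 1. The band width and sample count are placeholders.

import torch

def depth_guided_samples(depth, n_samples=32, band=0.05):
    """depth: (n_rays,) rendered surface depth per ray -> (n_rays, n_samples) z values."""
    near = (depth - band).clamp(min=1e-3)   # thin band around the rendered surface
    far = depth + band
    t = torch.linspace(0., 1., n_samples, device=depth.device)
    return near[:, None] + (far - near)[:, None] * t[None, :]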

Testing

Download testing poses from this link and unzip them to a directory of your choice, denoted as TESTING_POSE_DIR.

  • Specify prev_ckpt in configs/subject00.yaml#L78 as the pretrained model ./pretrained_models/subject00 or a checkpoint you trained yourself.
  • Specify data_path in configs/subject00.yaml#L60 as the testing pose path, e.g., TESTING_POSE_DIR/thuman4/pose_01.npz. (A sanity-check sketch for these two edits follows this list.)
  • Run the following script.
python main.py -c configs/subject00.yaml -m test
  • The output results can be found in ./test_results/subject00.
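To double-check the two config edits above before testing, a small illustrative script (not part of the repo) can load the YAML and report the two values, wherever they nest in the file:

import yaml

def find_key(node, key):
    """Recursively search a parsed YAML tree for `key` and return its value."""
    if isinstance(node, dict):
        if key in node:
            return node[key]
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        found = find_key(child, key)
        if found is not None:
            return found
    return None

with open('configs/subject00.yaml') as f:
    cfg = yaml.safe_load(f)
print('prev_ckpt:', find_key(cfg, 'prev_ckpt'))  # expect ./pretrained_models/subject00
print('data_path:', find_key(cfg, 'data_path'))  # expect the testing pose .npz path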

License

MIT License. SMPL-X related files are subject to the license of SMPL-X.

Citation

If you find our code or paper useful for your research, please consider citing:

@inproceedings{li2023posevocab,
  title={PoseVocab: Learning Joint-structured Pose Embeddings for Human Avatar Modeling},
  author={Li, Zhe and Zheng, Zerong and Liu, Yuxiao and Zhou, Boyao and Liu, Yebin},
  booktitle={ACM SIGGRAPH Conference Proceedings},
  year={2023}
}
