jby1993 / SelfReconCode

SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video

This repository contains a pytorch implementation of "SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video (CVPR 2022, Oral)".
Authors: Boyi Jiang, Yang Hong, Hujun Bao, Juyong Zhang.

This code is protected under patent and can only be used for research purposes. For commercial use, please send an email to jiangboyi@idr.ai.

Requirements

  • Python 3
  • Pytorch3d (0.4.0, some compatibility issues may occur in higher versions of pytorch3d)

Note: An RTX 3090 is recommended for running SelfRecon; make sure you have enough GPU memory if using other cards.
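
As an optional sanity check before training, the one-liner below (a generic PyTorch check, not part of this repository) prints the visible GPU and its total memory; a 3090 provides 24 GB.

python -c "import torch; p = torch.cuda.get_device_properties(0); print(p.name, round(p.total_memory / 1024**3, 1), 'GB')"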

Install

conda env create -f environment.yml
conda activate SelfRecon
bash install.sh

It is recommended to install pytorch3d 0.4.0 from source.

wget -O pytorch3d-0.4.0.zip https://github.com/facebookresearch/pytorch3d/archive/refs/tags/v0.4.0.zip
unzip pytorch3d-0.4.0.zip
cd pytorch3d-0.4.0 && python setup.py install && cd ..
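
After installation, a quick import check (a generic verification step, not from the repository) confirms the expected version:

python -c "import pytorch3d; print(pytorch3d.__version__)"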

Download the SMPL models from here and move the .pkl files to smpl_pytorch/model.
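
For example, assuming the downloaded SMPL archive was unpacked into a local smpl_models directory (a hypothetical path used only for illustration):

# smpl_models is a hypothetical extraction directory; adjust to your download location
mkdir -p smpl_pytorch/model
cp smpl_models/*.pkl smpl_pytorch/model/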

Run on PeopleSnapshot Dataset

The preprocessing of PeopleSnapshot is described here. If you want to optimize your own data, you can run VideoAvatar to get the initial SMPL estimation and then follow the same preprocessing. Alternatively, you can use your own SMPL initialization and normal prediction method, then use SelfRecon to reconstruct.

Preprocess

Download the Dataset and unzip it to some directory $ROOT. Run the following command to extract the data for female-3-casual, for example:

python people_snapshot_process.py --root $ROOT/people_snapshot_public/female-3-casual --save_root $ROOT/female-3-casual
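
To preprocess several sequences in one go, a simple loop works; the sequence names below are placeholders, so substitute the ones you actually downloaded:

for seq in female-3-casual male-3-casual; do
    python people_snapshot_process.py --root $ROOT/people_snapshot_public/$seq --save_root $ROOT/$seq
done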

Extract Normals

To enable our normal optimization, you have to install PIFuHD and Lightweight Openpose in your $ROOT1 and $ROOT2 first. Then copy generate_normals.py and generate_boxs.py to $ROOT1 and $ROOT2 respectively, and run the following commands to extract normals before running SelfRecon:

cd $ROOT2
python generate_boxs.py --data $ROOT/female-3-casual/imgs
cd $ROOT1
python generate_normals.py --imgpath $ROOT/female-3-casual/imgs
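
Since these two steps run once per sequence, you may find it convenient to wrap them in a small helper; this is just a sketch assuming the paths above, and the subshells keep your working directory unchanged:

SEQ=$ROOT/female-3-casual
(cd $ROOT2 && python generate_boxs.py --data $SEQ/imgs)        # bounding boxes (Lightweight Openpose side)
(cd $ROOT1 && python generate_normals.py --imgpath $SEQ/imgs)  # normal maps (PIFuHD side)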

Then run SelfRecon with the following command; this may take about one day to finish:

CUDA_VISIBLE_DEVICES=0 python train.py --gpu-ids 0 --conf config.conf --data $ROOT/female-3-casual --save-folder result

The results are located in $ROOT/female-3-casual/result.
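
Because a full run takes around a day, you may want to detach it from the terminal and follow the log; this is a standard shell pattern, not a repository feature:

CUDA_VISIBLE_DEVICES=0 nohup python train.py --gpu-ids 0 --conf config.conf --data $ROOT/female-3-casual --save-folder result > train.log 2>&1 &
tail -f train.log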

Inference

Run the following command to generate the rendered meshes and images:

CUDA_VISIBLE_DEVICES=0 python infer.py --gpu-ids 0 --rec-root $ROOT/female-3-casual/result/ --C

Texture

This repo provides a script to utilize VideoAvatar to extract the texture for the reconstructions of SelfRecon.

First, you need to install VideoAvatar and copy texture_mesh_extract.py to its repository path.

Next, after performing inference for $ROOT/female-3-casual/result, you need to simplify and parameterize the template mesh tmp.ply yourself and save the resulting mesh as $ROOT/female-3-casual/result/template/uvmap.obj (one possible pipeline is sketched after the command below). Afterwards, run the following command to generate the data for texture extraction:

CUDA_VISIBLE_DEVICES=0 python texture_mesh_prepare.py --gpu-ids 0 --num 120 --rec-root $ROOT/female-3-casual/result/
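
For the manual simplification and parameterization step above, one possible pipeline (an assumption on our part, not the authors' method) uses open3d for quadric decimation and xatlas for UV unwrapping; both are pip-installable and neither is a dependency of this repository:

python - <<'EOF'
# Sketch: decimate tmp.ply and UV-unwrap it into uvmap.obj.
# Assumes: pip install open3d xatlas; the target triangle count is a free parameter.
import numpy as np
import open3d as o3d
import xatlas

mesh = o3d.io.read_triangle_mesh("tmp.ply")
mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=10000)
v = np.asarray(mesh.vertices)
f = np.asarray(mesh.triangles)

# xatlas builds a UV atlas; vmapping re-indexes vertices duplicated along UV seams
vmapping, indices, uvs = xatlas.parametrize(v, f)
xatlas.export("uvmap.obj", v[vmapping], indices, uvs)
EOF

Move the exported uvmap.obj to $ROOT/female-3-casual/result/template/ before continuing.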

Finally, go to the VideoAvatar path and run the following command to extract the texture:

CUDA_VISIBLE_DEVICES=0 python texture_mesh_extract.py --tmp-root $ROOT/female-3-casual/result/template

Dataset

The processed dataset, our trained models, some reconstruction results, and textured meshes can be downloaded via the link. You can download and unzip some smartphone data, such as CHH_female.zip, into $ROOT and train directly with:

CUDA_VISIBLE_DEVICES=0 python train.py --gpu-ids 0 --conf config.conf --data $ROOT/CHH_female --save-folder result

You can also unzip CHH_female_model.zip into $ROOT/CHH_female and run:

CUDA_VISIBLE_DEVICES=0 python infer.py --gpu-ids 0 --rec-root $ROOT/CHH_female/trained/ --C

to check the results of our trained model in $ROOT/CHH_female/trained.

Note: If you want to train on the synthetic data, config_loose.conf is preferred.

Acknowledgement

Here are some great resources we benefit from or utilize: SMPL, Pytorch3d, VideoAvatar, PIFuHD, and Lightweight Openpose.

This research was supported by the National Natural Science Foundation of China (No. 62122071), the Youth Innovation Promotion Association CAS (No. 2018495), and the Fundamental Research Funds for the Central Universities (No. WK3470000021).

Citation

@inproceedings{jiang2022selfrecon,
  author    = {Boyi Jiang and Yang Hong and Hujun Bao and Juyong Zhang},
  title     = {SelfRecon: Self Reconstruction Your Digital Avatar from Monocular Video},
  booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}

License

For non-commercial research use only.
