
ECON: Explicit Clothed humans Obtained from Normals

Yuliang Xiu · Jinlong Yang · Xu Cao · Dimitrios Tzionas · Michael J. Black

CVPR 2023

Project Page: https://xiuyuliang.cn/econ


ECON is designed for human digitization from a color image. It combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with loose clothing or in challenging poses. ECON also supports multi-person reconstruction and SMPL-X-based animation.

TODO

  • Blender add-on for FBX export
  • Full RGB texture generation

Table of Contents
  1. Instructions
  2. Demo
  3. Applications
  4. Citation

Instructions

Demo

# For single-person image-based reconstruction (w/ all visualization steps, ~1.8 min)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results

# For multi-person image-based reconstruction (see ./configs/econ.yaml)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi

# To generate the demo video of reconstruction results
python -m apps.multi_render -n <filename>

# To animate the reconstruction with SMPL-X pose parameters
python -m apps.avatarizer -n <filename>
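
As the last command indicates, apps.avatarizer animates the reconstruction via SMPL-X pose parameters. As a rough illustration of what driving a SMPL-X body with pose parameters involves, here is a minimal sketch using the smplx package; the model path, pose source, and output filename are placeholder assumptions, not part of ECON's actual pipeline:

import torch
import smplx
import trimesh

# Load a neutral SMPL-X body model (assumes the SMPL-X model files live under ./models).
model = smplx.create("./models", model_type="smplx", gender="neutral", use_pca=False)

# Axis-angle body pose, shape (1, 21 * 3); zeros give the rest pose. In practice
# these values would come from a motion sequence or a fitted SMPL-X estimate.
body_pose = torch.zeros(1, model.NUM_BODY_JOINTS * 3)

# The forward pass returns the posed mesh vertices.
output = model(body_pose=body_pose, return_verts=True)
vertices = output.vertices.detach().cpu().numpy()[0]  # (10475, 3)

# Export the posed body for inspection.
trimesh.Trimesh(vertices, model.faces, process=False).export("posed_smplx.obj")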

More Qualitative Results

  • OOD Poses: challenging poses
  • OOD Clothes: loose clothes

Applications

  • SHHQ: ECON can provide pseudo 3D ground truth for the SHHQ dataset, as sketched below
  • Crowd: ECON supports multi-person reconstruction
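
To use ECON reconstructions as pseudo ground truth, a common recipe is to sample points and normals from the output meshes. A minimal sketch using trimesh, assuming the demo above wrote OBJ meshes into ./results (the filename here is hypothetical):

import trimesh

# Load a reconstructed mesh (the path is a placeholder for an actual ECON output).
mesh = trimesh.load("./results/example_full.obj", process=False)

# Uniformly sample 100k points on the surface; sample_surface also returns the
# index of the face each point came from, which yields a normal per sample.
points, face_idx = trimesh.sample.sample_surface(mesh, 100000)
normals = mesh.face_normals[face_idx]

# points and normals can now supervise, e.g., an implicit-surface network.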


Citation

@inproceedings{xiu2023econ,
  title     = {{ECON: Explicit Clothed humans Obtained from Normals}},
  author    = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
}

Acknowledgments

We thank Lea Hering and Radek Daněček for proofreading, Yao Feng, Haven Feng, and Weiyang Liu for their feedback and discussions, and Tsvetelina Alexiadis for her help with the AMT perceptual study.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).

Contributors

Kudos to all of our amazing contributors! ECON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.


License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For technical questions, please contact yuliang.xiu@tue.mpg.de

For commercial licensing, please contact ps-licensing@tue.mpg.de
