
INSTA - Instant Volumetric Head Avatars [CVPR2023]

Home Page: https://zielon.github.io/insta/



Wojciech Zielonka, Timo Bolkart, Justus Thies

Max Planck Institute for Intelligent Systems, Tübingen, Germany

Video  Paper  Project Website  Dataset  Face Tracker  INSTA Pytorch  Email


Official Repository for CVPR 2023 paper Instant Volumetric Head Avatars

This repository is based on instant-ngp; some features of the original code are not available in this work. Therefore, please restrict the program options to the main menu only.

⚠ We have also prepared a PyTorch demo version of the project: INSTA Pytorch

Installation

The repository is based on a specific instant-ngp commit. The installation requirements are the same, so please follow the instant-ngp guide. Remember to use the --recursive option when cloning.

git clone --recursive https://github.com/Zielon/INSTA.git
cd INSTA
cmake . -B build
cmake --build build --config RelWithDebInfo -j

Usage and Requirements

After building the project, you can either train an avatar from scratch or load a snapshot. For training, we recommend a GPU at least as powerful as an RTX 3090 with 24 GB of VRAM (we have not tested other GPUs) and 64 GB of RAM. Rendering from a snapshot does not require a high-end GPU and can be done even on a laptop; we have tested it on a laptop RTX 3080 with 8 GB.

The viewer options are the same as for instant-ngp, with one additional key, F, which raycasts the FLAME mesh.

Usage example

# Training
./build/rta --config insta.json --scene data/obama --height 1024 --width 1024

# Loading from a checkpoint
./build/rta --config insta.json --scene data/obama --height 1024 --width 1024 --snapshot data/obama/snapshot.msgpack

Dataset and Training

We are releasing part of our dataset together with publicly available preprocessed avatars from NHA, NeRFace, and IMAvatar. The training output (Record Video in the menu), including rendered frames, checkpoints, etc., will be saved in ./data/{actor}/experiments/{config}/debug. After the specified number of steps, the program will automatically render either all videos (the All option) or only the one currently selected in Mode.

Available avatars: click an avatar to download its training dataset and checkpoint. The avatars have to be placed in the data folder.

Dataset Generation

Input generation requires a conda environment and a few additional repositories. Simply run install.sh from the scripts folder to prepare the workbench.

Next, you can use the Metrical Photometric Tracker to track a sequence. After processing is done, run the generate.sh script to prepare the sequence. As input, specify the absolute path of the tracker output.

For training, we recommend at least 1000 frames.

# 1) Run the tracker for a selected actor
python tracker.py --cfg ./configs/actors/duda.yml

# 2) Generate a dataset using the script. Importantly, use the absolute path to tracker input and desired output.
./generate.sh /metrical-tracker/output/duda INSTA/data/duda 100

# ./generate.sh {input} {output} {# of test frames from the end}

Citation

If you use this project in your research, please cite INSTA:

@inproceedings{INSTA:CVPR2023,
  author = {Zielonka, Wojciech and Bolkart, Timo and Thies, Justus},
  title = {Instant Volumetric Head Avatars},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year = {2023}
}

About


License: Other
