ShenhanQian / GaussianAvatars

[CVPR 2024 Highlight] The official repo for "GaussianAvatars: Photorealistic Head Avatars with Rigged 3D Gaussians"

Home Page: https://shenhanqian.github.io/gaussian-avatars


FLAME Fitting

seungeunlee-klleon opened this issue

Dear authors,

Thank you for your great work.

I wonder which method you use for FLAME fitting. Did you try multi-view fitting on 3D keypoints for FLAME parameters?

Best Regards,

Hi, our tracking method is modified from VHA. It uses 3D landmarks predicted by face_alignment. However, we moved to 2D landmarks for their higher accuracy, especially for the mouth and eyes.
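
For a concrete picture of what a per-view 2D landmark term could look like, here is a minimal sketch. The camera interface (`cam.project`) and the confidence weighting are assumptions for illustration, not the authors' actual code:

```python
import torch

def landmark_loss_2d(lmks_3d, cams, lmks_2d_detected, confidence=None):
    """Sketch of a multi-view 2D landmark loss (hypothetical interfaces).

    lmks_3d:           (L, 3) landmarks on the FLAME surface
    cams:              camera objects, each with .project(points) -> (L, 2)
    lmks_2d_detected:  (V, L, 2) per-view detections (e.g. from face_alignment)
    confidence:        optional (V, L) detector confidences
    """
    loss = 0.0
    for v, cam in enumerate(cams):
        pred = cam.project(lmks_3d)                      # (L, 2) pixel coords
        err = (pred - lmks_2d_detected[v]).norm(dim=-1)  # per-landmark error
        if confidence is not None:                       # down-weight unsure points
            err = err * confidence[v]
        loss = loss + err.mean()
    return loss / len(cams)
```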

Thank you for your kind response!

Hi,

My understanding is that you used the setup from NeRSemble, which has 16 cameras (4 of which are held out as testing views).
Did you use the multi-view FLAME fitting of VHT on the 12 training views? Or did you fit the FLAME parameters for each of the 16 views separately?

I appreciate your help.

Hi, we did multi-view fitting with all 16 views in NeRSemble.
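
As a rough sketch of what such joint multi-view fitting might look like (the `flame` forward function, `cams`, and `lmks_2d` are hypothetical stand-ins, reusing the `landmark_loss_2d` sketch above; a real tracker would also regularize the parameters and fit per-frame expression and pose):

```python
import torch

# Hypothetical stand-ins: `flame` returns vertices and 3D landmarks;
# `cams` / `lmks_2d` come from the 16 calibrated NeRSemble views.
shape = torch.zeros(1, 300, requires_grad=True)  # identity, shared across views
expr  = torch.zeros(1, 100, requires_grad=True)  # expression (per time step)
pose  = torch.zeros(1, 15,  requires_grad=True)  # global/neck/jaw/eye pose

optim = torch.optim.Adam([shape, expr, pose], lr=1e-2)
for step in range(1000):
    verts, lmks_3d = flame(shape, expr, pose)            # FLAME forward pass
    loss = landmark_loss_2d(lmks_3d[0], cams, lmks_2d)   # all views jointly
    optim.zero_grad()
    loss.backward()
    optim.step()
```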

Thanks for your prompt reply! Any chance of the fitting code being released, or is there an open-source repo you used for the multi-view fitting?

Hi, I've read the tracking code of VHA, and I wonder about the specific losses used in your tracking framework. From the discussion above, I believe a 2D landmark loss is adopted for each view. Is a photometric loss also used with the FLAME texture, as in VHA? And are any extra loss functions introduced compared with VHA?

I would greatly appreciate it if more of the tracking code could be released besides flame.py and lbs.py in the model (it's fine if it can't be run directly; it would just be for reference and understanding). But even just showing more about the tracking loss would be great.

Again thanks for your great work!

Hi, we didn't use FLAME's PCA texture. Instead, we optimize a texture map for each subject. This allows us to capture arbitrary colors of hair, skin, and collars occluding the neck.
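
A minimal sketch of that idea, assuming a differentiable textured renderer (`render_textured` is a placeholder, not the authors' API): the UV texture is a free parameter optimized by a photometric term over all views.

```python
import torch

# Per-subject texture as a free parameter, initialized from some base map.
texture = torch.nn.Parameter(init_texture.clone())  # (3, H, W) UV texture

def photometric_loss(verts, faces, uvs, cams, images):
    """L1 photometric term summed over views; `render_textured` is assumed
    to be a differentiable rasterizer returning a (3, H, W) image."""
    loss = 0.0
    for cam, img in zip(cams, images):
        rendering = render_textured(verts, faces, uvs, texture, cam)
        loss = loss + (rendering - img).abs().mean()
    return loss / len(cams)
```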

Based on your suggestion, we have added some reference code for head tracking here.

Thank you very much! I will check the released reference code. Thanks again for your kind response and for sharing the code.

UPDATE: I understand most of the tracking part after reading the code. One more small question about the tex_paint that is used: do we need to optimize this part, or does it require preprocessing to "paint" it?

(It might be useful to update flame.py and lbs.py for the tracking part [FlameTexPaint and UVs, for completeness] so that it runs, but if you don't have such plans, that's OK too.)

We manually paint the mean texture map of FLAME and use it as the initial texture map during tracking. You can find it here.
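
For completeness, a small sketch of how such an initialization could be wired up (the file name is a placeholder, not the actual asset path):

```python
import numpy as np
import torch
from PIL import Image

# Load the manually painted FLAME mean texture (placeholder path) and use
# it to initialize the per-subject texture optimized during tracking.
img = Image.open("flame_mean_texture_painted.png").convert("RGB")
init_texture = torch.from_numpy(np.asarray(img).copy()).float() / 255.0
init_texture = init_texture.permute(2, 0, 1)  # (3, H, W)
texture = torch.nn.Parameter(init_texture.clone())
```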

Thanks for your helpful response. I appreciate your assistance!