aurelianocyp / AniFaceGAN

This is a PyTorch implementation of the following paper: AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars, NeurIPS 2022 (Spotlight).

Home Page: https://yuewuhkust.github.io/AniFaceGAN/

Environment

The code is tested in the Docker environment yuewuust/pytorch1.11.0_nviffrast:v11; please refer to the linked image.

We use PyTorch 1.11.0.

Tested on an RTX 3090 with Python 3.8, Ubuntu 20.04, and CUDA 11.3.

Installing the other libraries yourself is fine; the exact versions are not a big problem. Do not use that Docker image.
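If you configure the environment yourself, a short check like the sketch below (not part of the repository, purely illustrative) confirms that the expected PyTorch/CUDA stack is visible:

```python
# Illustrative sanity check for the environment listed above
# (PyTorch 1.11.0, CUDA 11.3, Python 3.8, RTX 3090).
import torch

print("PyTorch:", torch.__version__)              # expected: 1.11.0
print("CUDA build:", torch.version.cuda)          # expected: 11.3
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # e.g. NVIDIA GeForce RTX 3090
```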

Test

The expression coefficients are extracted by Deep3DFaceRecon, and we provide a smile expression, ./mat/01626.mat, as an example. Zero expression is defined as a neutral face.
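As a quick orientation, the provided coefficient file can be inspected with scipy. This is only an illustrative sketch; the key name 'exp' for the expression coefficients follows the usual Deep3DFaceRecon convention and is an assumption, so check the printed key list first.

```python
# Illustrative sketch: peek at the provided smile-expression coefficients.
from scipy.io import loadmat
import numpy as np

coeffs = loadmat("./mat/01626.mat")
print([k for k in coeffs if not k.startswith("__")])   # arrays stored in the file

exp = coeffs.get("exp")             # expression coefficients (assumed key name)
if exp is not None:
    exp = np.asarray(exp)
    print("expression coefficients:", exp.shape)
    neutral = np.zeros_like(exp)    # zero expression corresponds to a neutral face
```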

Run

./render.sh

and the rendered multiview images and videos will be stored in ./multiview_imgs/.

To do

  • Release inference code
  • Release pretrained checkpoints
  • Clean up code
  • Add detailed instructions

Notes

To generate results for the head pose and expression parameters in your own .mat file, just change render.py to render_mine.py inside render.sh.
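Before switching to render_mine.py, it can help to confirm that your .mat file actually contains pose and expression coefficients. The sketch below is purely illustrative; the key names ('angle' for head pose, 'exp' for expression) follow the usual Deep3DFaceRecon output layout but are an assumption, and ./mat/my_face.mat is a hypothetical path.

```python
# Hypothetical check (not part of the repository): verify that a Deep3DFaceRecon
# coefficient .mat contains the head-pose and expression fields before rendering.
from scipy.io import loadmat

def check_coeff_mat(path):
    data = loadmat(path)
    keys = [k for k in data if not k.startswith("__")]
    print(path, "->", keys)
    for required in ("angle", "exp"):   # assumed key names for pose / expression
        if required not in data:
            print(f"warning: expected key '{required}' not found")

check_coeff_mat("./mat/my_face.mat")    # hypothetical path to your own file
```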

Languages

Python 99.7%, Shell 0.3%