gafniguy / 4D-Facial-Avatars

Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction

How to do facial reenactment?

sunshineatnoon opened this issue · comments

Hi, thanks for open-sourcing this awesome work. I am trying to reproduce the facial reenactment shown in Fig. 5 of the paper. Could you please let me know how to do that? Thanks!

Hey, if you use the expressions and poses from one actor with a model trained on another, you will basically get reenactment.
There are a few things to be careful with:

  • provide the correct background image you want to use
  • the expression and face identity coefficients are quite entangled (this is inherent in the 3DMM face model), so instead of using the expressions themselves, it looks better if you just transfer the 'expression delta' from one person's "neutral expression" to the other person's "neutral expression" (see the sketch after this list). The logic for this is in real_to_nerf.py.
  • If you go for head angles beyond those seen during training, it obviously won't look good.
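
To make the delta transfer concrete, here's a minimal sketch (illustrative names and shapes; real_to_nerf.py has the actual logic):

```python
import torch

# Illustrative inputs: per-frame FLAME expression vectors (76-dim in this
# repo) for the driving actor, plus one manually chosen neutral frame
# per actor.
expressions_driving = torch.randn(100, 76)        # all driving frames
neutral_driving = expressions_driving[0].clone()  # driving actor's neutral frame
neutral_target = torch.randn(76)                  # target actor's neutral frame

# Transfer only the deviation from neutral rather than the raw
# coefficients, so the target's identity stays intact.
transferred = neutral_target + (expressions_driving - neutral_driving)
```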

Thanks for the reply. Is there any plan to release the related code in the near future?

Hi, I tried to implement the facial reenactment and got the results below, but they don't look as clean as Fig. 7 in the paper.

I use person_1 as the target and person_2 as the driving actor. I added a custom_seq_driving call before line 367 in eval_transformed_rays.py to compute the transferred expressions and poses, and then use the resulting expressions and poses inside the for loop.

I tried both transferring the raw expressions and transferring the expression deltas (where I manually chose frame 973 of person_1 and frame 990 of person_2 as the neutral expressions).
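
For concreteness, here is a minimal sketch of what such a custom_seq_driving helper could look like (illustrative, not the exact code I ran; the neutral-frame indices are the ones mentioned above):

```python
def custom_seq_driving(poses_driving, render_poses,
                       expressions_driving, render_expressions,
                       neutral_driving_idx=990, neutral_target_idx=973):
    """Drive the target's renders with the driving actor's rigid poses
    and delta-transferred expressions (illustrative sketch)."""
    # Expression delta relative to the driving actor's neutral frame,
    # re-anchored on the target actor's neutral frame.
    delta = expressions_driving - expressions_driving[neutral_driving_idx]
    transferred_expressions = render_expressions[neutral_target_idx] + delta
    # Reuse the driving actor's rigid head poses as-is.
    return transferred_expressions, poses_driving
```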

(attached videos: out.mp4, transferred.mp4)

Any help and suggestions would be appreciated. Thanks for your time!

Hi, thanks for sharing the solution, but I get an index-out-of-bounds error when I add the code below before line 367. Could you share more details on how I should change it? Thanks.
```python
_, posesD, _, _, _, expressionsD, _, _ = load_flame_data(
    "nerface_dataset/person_2",
    half_res=cfg.dataset.half_res,
    testskip=cfg.dataset.testskip,
    test=True,
)
# i_train, i_val, i_test = i_split
i_test = i_split
rigid_poses_driving = posesD[i_test].float().to(device)
expressions_driving = expressionsD[i_test].float().to(device)
render_expressions, render_poses = custom_seq_driving(
    rigid_poses_driving, render_poses, expressions_driving, render_expressions
)
```
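
My current guess (unconfirmed) is that i_split comes from the target sequence loaded earlier in eval_transformed_rays.py, so it can index past the end of the shorter driving sequence. Something like the clamp below avoids the error, but I'm not sure it's the intended fix:

```python
# Unconfirmed guess: i_split was computed for the (longer) target sequence,
# so clamp its indices to the driving sequence's length before indexing.
i_test = [i for i in i_split if i < posesD.shape[0]]
rigid_poses_driving = posesD[i_test].float().to(device)
expressions_driving = expressionsD[i_test].float().to(device)
```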

See details in #57 and #37.

Have you solved this problem yet?