NetEase-GameAI / Face2FaceRHO

The Official PyTorch Implementation for Face2Face^ρ (ECCV2022)

Test custom image but the result is poor

Barrett-python opened this issue · comments

When I used a face as the source image and a video to drive it, the results were poor, especially where the source image and video were not aligned, such as side faces. What could cause this? Thanks!

1. Currently we require the input image to be square, and the pre-trained model is for images with a resolution of 512×512. Check that you are using these settings.
2. Although we don't require the driving and source faces to be exactly aligned as X2Face does, the two faces do need to be roughly aligned with each other. We recommend following the preprocessing steps of FOMM to preprocess the inputs. It's important to note that, compared to X2Face, we obtain more natural videos in which faces move freely within the bounding box.
3. Our method supports changing the head pose by around 30°–45° in roll, pitch, or yaw. Note that the most recent works [7,43] that focus on pose editing typically only allow pose changes of up to around 30°, so this range of pose variation can already be considered large for one-shot face reenactment. If the head pose difference between the source and driving images is too large, the resulting quality may drop.
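For points 1 and 2, a minimal sketch of the kind of preprocessing involved: computing a square, margin-expanded crop box around a detected face, which can then be resized to 512×512. This is an illustrative helper, not code from this repo; the `scale` margin and the function itself (`square_crop_box`) are assumptions in the spirit of FOMM-style cropping, and face detection itself is left to an external detector.

```python
def square_crop_box(face_box, img_w, img_h, scale=1.8):
    """Compute a square crop around a detected face box.

    face_box: (x0, y0, x1, y1) from any face detector (hypothetical input).
    scale:    margin factor around the face, so the head can move freely
              inside the crop (value is an assumption, tune as needed).
    Returns an (x0, y0, x1, y1) square clamped to the image bounds.
    """
    x0, y0, x1, y1 = face_box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    side = max(x1 - x0, y1 - y0) * scale
    side = min(side, img_w, img_h)      # square cannot exceed the image
    half = side / 2
    # Shift the centre so the square stays fully inside the image.
    cx = min(max(cx, half), img_w - half)
    cy = min(max(cy, half), img_h - half)
    return (round(cx - half), round(cy - half),
            round(cx + half), round(cy + half))
```

The resulting square region would then be cropped and resized to 512×512 (e.g. with PIL's `Image.crop(...).resize((512, 512))`) for both the source image and every driving frame, which keeps the two faces roughly aligned within their crops.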

Thank you for your reply!