oawiles / X2Face

PyTorch code for ECCV 2018 paper

Crop size for source and driving image

Blade6570 opened this issue · comments

Hi, thank you for releasing the pre-trained model. While testing on other videos, I found that the crop size strongly affects the quality of the results. I cropped faces from the video while varying the rectangle size at random and kept the crop that gave a reasonable result. Could you please state the exact crop size you used after detecting faces with dlib? It would be of great help.

Hi. We got the data from someone else (linked from our website: http://www.robots.ox.ac.uk/~vgg/research/unsup_learn_watch_faces/x2face.html), so unfortunately you'll have to see what they say.

From a quick browse, I believe they say this in their paper:
"Since in both datasets, the specified face regions yield a tight face crop,
we expand all crops by a factor of ×1.6 to incorporate additional context
into the face region."
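For reference, that expansion can be sketched as follows. This is a minimal, hypothetical implementation (not the dataset authors' actual code): it takes a dlib-style `(left, top, right, bottom)` face box, scales it about its center by a factor of 1.6, and clamps the result to the image bounds. The function name and signature are my own.

```python
def expand_crop(left, top, right, bottom, img_w, img_h, factor=1.6):
    """Expand a tight face box about its center by `factor`,
    clamped to the image bounds (hypothetical helper, not from the repo)."""
    cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
    half_w = (right - left) * factor / 2.0
    half_h = (bottom - top) * factor / 2.0
    new_left = max(0, int(round(cx - half_w)))
    new_top = max(0, int(round(cy - half_h)))
    new_right = min(img_w, int(round(cx + half_w)))
    new_bottom = min(img_h, int(round(cy + half_h)))
    return new_left, new_top, new_right, new_bottom

# A 100x100 box centered at (150, 150) in a 640x480 image
# grows to 160x160 around the same center.
print(expand_crop(100, 100, 200, 200, 640, 480))  # (70, 70, 230, 230)
```

Note that boxes near the image border get clipped rather than padded here; if you need a fixed output size, you may want to pad the image instead of clamping.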