radekd91 / emoca

Official repository accompanying the CVPR 2022 paper EMOCA: Emotion Driven Monocular Face Capture And Animation. EMOCA takes a single image of a face as input and produces a 3D reconstruction. EMOCA sets a new standard for reconstructing highly emotional in-the-wild images.

Home Page: https://emoca.is.tue.mpg.de/


How to preprocess the training dataset?

WYao02 opened this issue

WYao02 commented:

Hi Radek,

Thanks for your amazing work!
I saw that you used the DECA training data and AffectNet data.
I'm wondering if you could describe how to preprocess a dataset like VGGFace2. In DECA, the authors say they use:

- FAN to predict the 68 2D landmarks
- face_segmentation to get the skin mask

But many details are not written down, such as whether the images need to be cropped and aligned, and what image size should be used when computing the landmarks and skin mask (a sketch of my current guess is below). Would you please describe a way for us to preprocess the training data?
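For concreteness, here is a minimal sketch of what I assume the landmark/cropping step looks like, using the `face_alignment` package (the reference FAN implementation). The 1.25 bbox scale and the 224×224 crop size are my guesses taken from DECA's demo-time code, not confirmed training settings:

```python
# Sketch only: FAN landmarks + DECA-demo-style square crop.
# Assumptions: face_alignment package (FAN), bbox scale 1.25, crop 224x224.
import numpy as np
import face_alignment
from skimage import io, transform

# LandmarksType._2D in face_alignment < 1.4; LandmarksType.TWO_D in newer versions.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
                                  flip_input=False, device='cpu')

image = io.imread('face.jpg')
landmarks = fa.get_landmarks(image)  # list of (68, 2) arrays, one per detected face
if landmarks is None:
    raise RuntimeError('FAN found no face in the image')
lmk = landmarks[0]

# Square crop around the landmark bounding box, scaled by ~1.25.
left, top = lmk.min(axis=0)
right, bottom = lmk.max(axis=0)
size = 1.25 * max(right - left, bottom - top)
cx, cy = (left + right) / 2.0, (top + bottom) / 2.0

crop_size = 224
src = np.array([[cx - size / 2, cy - size / 2],   # top-left
                [cx - size / 2, cy + size / 2],   # bottom-left
                [cx + size / 2, cy - size / 2]])  # top-right
dst = np.array([[0, 0], [0, crop_size - 1], [crop_size - 1, 0]])
tform = transform.estimate_transform('similarity', src, dst)
cropped = transform.warp(image, tform.inverse,
                         output_shape=(crop_size, crop_size))

# Landmarks mapped into the crop; I assume they should then be normalized
# to [-1, 1] for training, but that is another guess.
lmk_cropped = tform(lmk)
```

If the actual training pipeline uses a different crop size, alignment, or landmark normalization, please correct me.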

Thank you!

By the way, VGGFace2 is still available to academics at this link: https://academictorrents.com/details/535113b8395832f09121bc53ac85d7bc8ef6fa5b/tech&filelist=1
But VoxCeleb2 (https://www.robots.ox.ac.uk/~vgg/data/voxceleb/vox2.html) seems to be unavailable...

Were you able to train DECA?