patrikhuber / 4dface

Real-time 3D face tracking and reconstruction from 2D video

Home Page: https://www.4dface.io


Blendshapes of the 16k vertex model produce faulty shapes

wli75 opened this issue · comments

commented

Thank you for your effort in this repo! I've learnt a lot from it.

I was wondering whether the blendshapes are working as intended. In addition to writing a neutral face to an .obj file, I also wanted to write a face with an expression to another file.
```cpp
// neutral_expression and merged_mesh are defined in 4dface.cpp
render::write_textured_obj(neutral_expression, "neutral.obj");
render::write_textured_obj(merged_mesh, "expression.obj");
```

The neutral face looks great, but not for the one with expression. I think it has something to do with the blendshape, but I'm not sure.
(screenshots: the neutral face renders correctly; the expression face is distorted)

Hi!

I think you have a simple mistake somewhere. You can try two things:

  1. Use the new fitting function, which directly returns a mesh with the expression: link

  2. Make sure you generate your shape correctly, like here: link.

You can also check the values of your shape coefficients (they should be roughly within ±2) and your blendshape coefficients (within [0, 1]).

commented

Ah! I think my problem was with sfm_3448_edge_topology.json, since I'm using a model with 16759 vertices. Any chance there will be an sfm_16759_edge_topology.json file?

I don't think the edge_topology file can be the cause of this. But maybe.

I've uploaded the files for all resolution levels to the CVSSP download folder (the one where you got the model from). Do you still have access and can re-download?

commented

Yes, I've downloaded the new edge_topology file. You're right, it doesn't fix the problem, but I think it has something to do with 3448 vs 16759. When I load the 16759 morphable model, it gives me 55 principal components. Is this correct, or should it be 63 principal components?
```cpp
cout << morphable_model.get_shape_model().get_num_principal_components() << endl;
```
Thank you for looking into this matter btw.

55 PCs is fine. We retained 95% of the variance, so each model has a slightly different number of PCs, around 60 or so.

I think it's best if you proceed with the things I outlined above. In addition, if you can, try your code on the 3448 model and check whether it works, just to make sure there's nothing broken with the 16759 blendshapes. I don't really use the 16759 model much myself, but it should be alright.

commented

The 3448 model is working great, but I can't figure out why it's not working for the 16759 model.
I've attached a simple program (~100 lines) that I used for testing the models. It simply populates the shape and blendshape coefficients with values within the expected ranges and displays the model.
If possible, could you take a quick look at it when you're free? I'd really appreciate it.
4dface.cpp.zip

Thanks for the minimal code example! Your code looks indeed perfectly fine. I think the expression_blendshapes_16759.bin file might be somehow broken, sorry about that! I'll try to find out more. Can you work with the lower or higher resolution model for now?

Okay, I just spent the last couple of hours investigating this. It's very odd. I double-checked the code which generates the reduced blendshapes and everything else I could think of, and all looks fine. All the blendshapes are generated by the same code, and it works correctly for all other model resolution levels.

This is how neutral and added surprise expression look like, with the 29k model:
(images: neutral and surprise renders of the 29k model)

And this is what happens to the 16k mesh when you add 0.2 * surprise:
(image: the 16k mesh with pyramid-like artifacts)

The "pyramids" seem to appear because some vertices are "stuck" at the neutral position and don't move along with the other vertices. I inspected these vertices, and all of the "stuck" ones have a vertex id above roughly 5000. This suggests the lower vertex ids move correctly while the higher ones don't. It's just a theory; I'm not sure about any of this, and I have absolutely no clue what could cause it.

Update: After comparing a neutral .obj and a 0.2*surprise .obj of the 16k model, I don't think what I described in the last paragraph is the whole story. Something else is going on as well, maybe random; I am not sure.

I don't think I'll have time to investigate this further in the next few months, as I will be busy with a deadline. Do you have a compelling reason why you have to use the 16k model and can't use the 29k or 3448 model? Or would you like to have a look at this problem yourself?

@wli75: I think you deleted your post, at least I got an email but can't find it posted here :-)

Great that you can use the 3448 or 29k model for now - and I suppose you found the blendshapes as well (I've renamed them now actually).

Hope we can get to the bottom of this issue with the 16k blendshapes at some point.

commented

Yes, I found the 29k model after I posted my previous comment, and then I frantically deleted the post.
But thank you @Larumbergera for following up.
And thank you again @patrikhuber for the repo!

It looks like the blendshapes assume the vertices in the 16k model match the first 16k vertices from the 29k model, but they don't. (For the lower resolution models this just happened to be the case.)

The blendshapes for the 16k model will have to be trained from 16k face data, then all should work fine.

I can generate 16k face data, but don't have the code to train blendshapes. Also I don't know the file format. @patrikhuber can you help with this?

Hey @wpk-,

Aah, excellent find, nice! I think that must be the cause for these issues then.
I am pretty sure that the author of the Resolution-Aware 3D Morphable Model told me that the lower-resolution model vertices are subsets of the larger models. The registration algorithm also works this way, and the paper describes that at each level, the current mesh topology is upsampled with the 4-8 mesh subdivision algorithm (which would keep existing vertices and add new ones). So it's odd that the 16k model suddenly contains "new" vertices that are not a superset of the lower levels.
We also wrote this down in our VISAPP paper and none of the co-authors ever hinted that this may be wrong for some models (it luckily only seems to affect the 16k model), so I am quite surprised by this discovery.

I'll send you the blendshapes training script.
Do you know how the 16k vertices are set up? Are they a subset of the 29k, just in a different order, or completely different points?

Cheers!

It's quite subtle. The model is upsampled with said algorithm and a given threshold on the maximum triangle size (2 mm² for the 29k model). Lower resolutions are created by raising the threshold. Intuitively you'd expect this to leave you with a mesh at an "intermediate" resolution. However, because some triangles are no longer subdivided under the higher threshold, the vertex order cannot be maintained. (So everything above is true, except the assumption that vertices keep their order.)

The 3448-vertex base model is created not with a threshold but with a fixed number of iterations that subdivide all triangles, so its vertex order is maintained.

This is, by the way, also true for the 16k and 29k models: the first 3448 vertices are the same. Because eos can run on the 3448 model, I'd hope it will run fine with the 16k model too; we just need to adapt the blendshapes.

Ah, I see! Cool! Thank you very much for that explanation! :-)

Yes, once we've got the correct blendshapes for the 16k model, everything should work just fine.

The blendshape file for the 16k model (expression_blendshapes_16759.bin) has been updated on the University of Surrey licence server. This should fix the issue.

@patrikhuber feel free to close this issue.

Thank you @wpk-! This is awesome.