custom-humans / editable-humans

[CVPR 2023] Learning Locally Editable Virtual Humans

Home Page: https://custom-humans.github.io/


About Generalization

MinJunKang opened this issue · comments

Hello,

Thanks for sharing this great work!

I have three questions about your work.

  1. Does this method generalize to novel clothes (which might not be included in the training dataset)?
  2. If not, how can I generate a new codebook for the novel clothes?
  3. If I have a 3D avatar mesh with thin clothes, is it possible to create an avatar with thick clothes? Do I have to do GAN inversion to find the latent space of these clothes?

Thanks for your interest in our work.

To answer your questions:

  1. Yes, the model generalizes to unseen scans. For instance, the demo mesh is not in the training set. During inference, we freeze the trained decoders and fit a new codebook to the unseen scan.

  2. The most promising way to generate a new codebook is to fit one to a 3D mesh. It is also possible to do PCA random sampling or interpolation using the 100 training codebooks, but the quality is not guaranteed.

  3. Not sure what you mean by thin and thick here; do you mean mesh thickness? Is your 3D avatar a single watertight mesh, or do you have a separate clothing mesh draped over your body mesh?

In practice, our work can only handle watertight meshes. First, you need to fit two codebooks, one to your thin clothing and one to your thick clothing. Since these codebooks are stored following the SMPL-X vertex topology, you only need to know the SMPL-X vertex indices that your clothing covers. You can uncomment Lines 58~67 in demo.py to see how clothing transfer works; a rough sketch of the idea is shown below.
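For intuition, here is a minimal sketch of the swap, not the repo's actual code; the shapes, names, and vertex indices are illustrative assumptions:

```python
# Illustrative sketch only -- not the actual demo.py code. It assumes each
# subject's codebook stores one feature vector per SMPL-X vertex, so clothing
# transfer amounts to copying the rows at the garment's vertex indices.
import torch

NUM_SMPLX_VERTS = 10475   # SMPL-X template vertex count
FEAT_DIM = 32             # assumed per-vertex feature dimension

codebook_src = torch.randn(NUM_SMPLX_VERTS, FEAT_DIM)  # subject wearing the target clothes
codebook_dst = torch.randn(NUM_SMPLX_VERTS, FEAT_DIM)  # subject to be re-dressed

# Hypothetical vertex ids covered by the garment; use the real indices for
# your clothing region in practice.
clothing_idx = torch.arange(3000, 4000)

codebook_mix = codebook_dst.clone()
codebook_mix[clothing_idx] = codebook_src[clothing_idx]  # swap garment features

# Interpolating whole subjects (the PCA/interpolation idea above) is a
# row-wise blend instead of a copy:
alpha = 0.5
codebook_blend = alpha * codebook_src + (1.0 - alpha) * codebook_dst
```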

Please let me know if you still have any questions.
Thanks

Thanks for your kind and detailed answer.

I'd like to ask for more detail about your answers.

For question 2, should I prepare a 3D mesh with the corresponding color to make a new codebook? How about using a single image of the clothes, or of a single human wearing the target clothes? Is this a possible scenario?

For question 3, sorry for the lack of detail. As you mentioned, I want to change the watertight clothes to non-watertight clothes that have their own thickness relative to SMPL-X. Is this possible?

Thanks!

And another question, which is totally different from the previous one.

To test your pre-trained model, I ran demo.py with the mesh-f00041.obj sample from your CustomHumans dataset.

I didn't change the feature codebook index of 32, and I uncommented this line to produce a rendered 2D image corresponding to feature codebook 32.

After running the code, the result seems badly broken. What could be the reason for this?

I also tested another sample, part of THuman 2.0:
[Screenshot: Figure 7]

In that line, I used the smpl_mesh given in the data folder. Should I use another one?

Let me answer your questions first:

> For question 2, should I prepare a 3D mesh with the corresponding color to make a new codebook? How about using a single image of the clothes, or of a single human wearing the target clothes? Is this a possible scenario?

A 3D textured mesh is the most straightforward way. Getting a 3D mesh from a single image is not trivial. You may try PIFuHD or ECON, but their mesh quality is also limited.

> For question 3, sorry for the lack of detail. As you mentioned, I want to change the watertight clothes to non-watertight clothes that have their own thickness relative to SMPL-X. Is this possible?

This problem also seems very challenging. Most 3D scans have only a single layer of vertices; only synthetic data has real thickness. A possible solution is doing garment registration on the real scans, but this also requires significant effort.

[Screenshot: Figure 6]

Apparently, your input files are not correct. Please make sure you have the correct input mesh and SMPL-X registration in Line 36. They should look like the screenshots below. BTW, self-contact might also slightly affect the fitting results.

[Screenshot: 2023-07-24, 5:08:23 PM]

[Screenshot: 2023-07-24, 5:08:33 PM]
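As a quick sanity check (my own suggestion, not part of the repo; trimesh and the file names are assumptions), you can compare the bounding boxes of the scan and its SMPL-X registration:

```python
# Minimal alignment check between a scan and its SMPL-X registration.
# File names are placeholders; trimesh is assumed to be installed.
import trimesh

scan = trimesh.load("scan.obj", process=False)
smplx_mesh = trimesh.load("smplx_registration.obj", process=False)

# The two axis-aligned bounding boxes should roughly coincide.
print("scan bounds:\n", scan.bounds)
print("smplx bounds:\n", smplx_mesh.bounds)

# A large centroid offset, or an extent ratio far from 1, indicates a
# misaligned or differently scaled registration.
print("centroid offset:", scan.centroid - smplx_mesh.centroid)
print("extent ratio:   ", scan.extents / smplx_mesh.extents)
```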

> I also tested another sample, part of THuman 2.0. [Screenshot: Figure 7]
>
> In that line, I used the smpl_mesh given in the data folder. Should I use another one?

Have you checked whether your THuman scan and SMPL-X mesh are aligned? I also tried fitting a random THuman scan (I cannot find your subject in the image) using the provided checkpoint, and it works well.

Input:

[Screenshot: 2023-07-24, 5:33:33 PM]

Fitting reconstruction:

[Screenshot: 2023-07-24, 5:37:53 PM]

Thanks for spending your valuable time answering my question.
For the first sample, this was apparently my fault for not aligning the SMPL-X mesh with the 3D scan. I checked the alignment on another CustomHumans scene and tested it; it worked perfectly.

For the second sample, I double-checked the SMPL-X alignment with the 3D scan, and it doesn't have a problem. Actually, I am testing subject 0525 of the THuman 2.0 dataset. It seems that you have thuman_smpl_mesh.pkl in your data directory, which is different from smpl_mesh.pkl. What is the difference between these two meshes? I think this might be causing the problem.

I've also tested this subject without any problem. Therefore, I guess your SMPL-X mesh has a different topology. I attach the obj file I used so you can compare it with yours.

[Screenshot: 2023-07-24, 7:36:44 PM]

0525_smplx.obj.zip
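If it helps to verify this, here is a quick check (trimesh assumed; the second file name is a placeholder for your own registration) that two meshes share the same topology, i.e. identical face connectivity:

```python
# Compare the topology (face connectivity) of two SMPL-X meshes.
import numpy as np
import trimesh

mesh_a = trimesh.load("0525_smplx.obj", process=False)  # the file attached above
mesh_b = trimesh.load("my_smplx.obj", process=False)    # your own registration

print("vertex counts:", len(mesh_a.vertices), len(mesh_b.vertices))
same_topology = (mesh_a.faces.shape == mesh_b.faces.shape
                 and np.array_equal(mesh_a.faces, mesh_b.faces))
print("same face connectivity:", same_topology)
```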

Sorry, I found the issue. I should rescale my scans to be in the -1 ~ 1 range, following your instructions. The SMPL-X model you provide and mine have different scales; even though the real 3D scan and the SMPL-X mesh are aligned, this scale difference causes the problem.
[Screenshot]
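For reference, the normalization I applied is roughly the following (a sketch with placeholder file names; trimesh assumed). The identical transform must also be applied to the SMPL-X mesh so the alignment is preserved:

```python
# Rescale a scan into the -1 ~ 1 cube, as the instructions require.
import numpy as np
import trimesh

scan = trimesh.load("scan.obj", process=False)

center = scan.bounds.mean(axis=0)                      # bounding-box center
scale = 2.0 / np.max(scan.bounds[1] - scan.bounds[0])  # longest side -> length 2

scan.apply_translation(-center)
scan.apply_scale(scale)
scan.export("scan_normalized.obj")
# Apply the same translation and scale to the SMPL-X mesh as well.
```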
Thanks for your kind reply.
Actually, I was really surprised to find this work, because I was thinking about the same task with a human avatar. This should be a great baseline for me to start from. I am not sure if I can complete this project, but if I do, I will definitely cite your paper. Thanks for your hard work!