zhan-xu / MoRig

Code for the SIGGRAPH Asia 2022 paper "MoRig: Motion-Aware Rigging of Character Meshes from Point Clouds"


How to evaluate on DeformingThings4D

rhebbalaguppe opened this issue

Can you share a demo script to evaluate on the DeformingThings4D dataset?
Or can you explain the changes that need to be made to the commands or files?
Also, can you share the filenames from the ModelsResources dataset shown in the main paper?

Hi, do you mean the second row of Figure 1? I think that's the only example driven by animation from DeformingThings.

Hi, thanks for sharing the code and trained models.
The following are unclear:

  • How to qualitatively evaluate/visualize results on DeformingThings4D?
    • I have extracted results from steps 7 & 8, but that just produces *_{pred_flow,src_vtx,tar_pts,tar_vtx}.npy files.
    • It is unclear how to extract the vertex trajectories, skeleton, and skinning from them.
    • Also, what changes need to be made to evaluate/visualize_tracking.py for DeformingThings4D?
  • How to reproduce the qualitative results for:
  • For custom data, can you provide a brief overview of the steps before the demo script is shared:
    • Preprocessing the point cloud sequence and mesh before running MoRig
    • How to create a custom dataloader?
    • Order in which the models need to be run
    • How to visualize each step (for debugging)

Hello,
1.
1.1-1.2 For DeformingThings4D, we only use it to train the correspondence and deformation modules. We didn't evaluate the rigging and animation steps on it. You can optionally evaluate the deformation performance, i.e., scene flow prediction, on it. To do this, as you already did, you can add pred_flow to src_vtx and compare with tar_vtx using MSE (see the sketch after 1.3).
1.3 visualize_tracking.py is used to visualize the final animation after IK, so no need to use this.
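For reference, the check in 1.1-1.2 is just a few lines; something like this sketch (file names are placeholders, and it assumes the three arrays share the same vertex ordering):

import numpy as np

# Placeholder file names; substitute your own outputs from steps 7 & 8.
pred_flow = np.load("example_pred_flow.npy")  # (N, 3) predicted scene flow
src_vtx = np.load("example_src_vtx.npy")      # (N, 3) source-frame vertices
tar_vtx = np.load("example_tar_vtx.npy")      # (N, 3) target-frame vertices

warped = src_vtx + pred_flow                  # deform the source by the predicted flow
mse = np.mean(np.sum((warped - tar_vtx) ** 2, axis=1))
print("deformation MSE:", mse)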
2.
2.1 I think your videos look good. The predicted deformation seems more or less similar to the GT deformation. Do you mean the animation in our video demo? For our demo, we use motions from Mixamo because they look more natural. The motions you've seen here are our synthetic motions.
2.2 Fig. 1, bottom row: the input mesh is goatS4J6Y, and the motion is bucksYJL_GetHit2.
2.3 Similar to RigNet, we use Open3D 0.9.0 to get the simplified mesh as below:
mesh_simplify = mesh.simplify_quadric_decimation(5000)
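A more complete version of that one-liner, just as a sketch with placeholder file paths:

import open3d as o3d

mesh = o3d.io.read_triangle_mesh("character.obj")        # placeholder path
mesh_simplify = mesh.simplify_quadric_decimation(5000)   # target ~5000 triangles
o3d.io.write_triangle_mesh("character_simplify.obj", mesh_simplify)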
3.
3.1 We use pyrender to simulate partial scans and synthesize the point cloud sequence. The number of points per frame is constrained to 1K. You can take a look at my unorganized headless-scan rendering script to get a sense of this process here. To process the mesh, similar to RigNet, we simply simplify it with "simplify_quadric_decimation" so it has between 1K and 5K vertices.
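The 1K-points-per-frame constraint itself is simple; a rough helper sketch (not the actual rendering script) would be:

import numpy as np

def subsample_frame(pts, n=1000):
    # pts: (M, 3) points scanned for one frame; keep exactly n of them.
    idx = np.random.choice(len(pts), n, replace=len(pts) < n)
    return pts[idx]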
3.2 The dataloader relies a bit on the pytorch-geometric library, especially its batching mechanism. You might take a look at that library as well as the scripts in the "datasets" folder.
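A toy illustration of that batching mechanism (not our actual dataset class): per-sample Data objects are concatenated into one big graph, and batch.batch records which sample each node came from.

import torch
from torch_geometric.data import Data, DataLoader  # DataLoader lives in torch_geometric.loader in newer versions

samples = []
for _ in range(4):
    pos = torch.rand(100, 3)                      # e.g. mesh vertices or scan points
    edge_index = torch.randint(0, 100, (2, 300))  # e.g. mesh/kNN edges
    samples.append(Data(pos=pos, edge_index=edge_index))

loader = DataLoader(samples, batch_size=2)
for batch in loader:
    # nodes of the two graphs are stacked; batch.batch maps each node to its graph
    print(batch.pos.shape, batch.batch.shape)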
3.3 If the shape of the reference character in the point cloud is different from the target mesh, you will need to first align the mesh to the shape of the reference character. The checkpoint deform_s_mr is trained for this; train_deform_shape.py is the script to train it. You can use it to output flow to align the two.
If the shapes of the reference character and the target mesh are the same, you will need to use deform_p_mr to get vertex trajectories first, and then use jointnet, masknet, skinnet, rootnet, and bonenet to get the rig. The steps to get the rig are similar to RigNet. I will upload a demo script with those steps.
3.4 We use Open3D to visualize. There are visualization scripts in the evaluate folder for some steps, and some visualization functions are in utils/vis_utils.py.
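For example, a minimal Open3D snippet of this kind (placeholder file name) to eyeball a point set:

import numpy as np
import open3d as o3d

pts = np.load("example_tar_pts.npy")  # placeholder: any (N, 3) array
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd.paint_uniform_color([1.0, 0.3, 0.3])
o3d.visualization.draw_geometries([pcd])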

Hi,
Thanks for the prompt reply.
2. Yes, the animation in the video demo. Can you share the procedure to retarget Mixamo animations to the meshes shown in the video? Or can you share those files?
3. I will try creating a demo script for custom input and will let you know if I face any issues.

Yes, Mixamo has automatic motion retargeting, but for humanoids only. That's why we mostly show humanoid animations in the demo. You can try "upload character" on the Mixamo website.

BTW, I think there is a typo in the command:
python -u training/train_rig.py
--arch="jointnet_motion"
--train_folder="$DATASET_PATH/ModelsReources/train/"
--val_folder="DATASET_PATH/ModelsReources/val/"
--test_folder="DATASET_PATH/ModelsReources/test/"
--train_batch=4 --test_batch=4
--logdir="logs/jointnet_motion"
--checkpoint="checkpoints/jointnet_motion"
--lr=5e-4 --schedule 40 80 --epochs=120
Shouldn't the folder name be ModelsResources?

Thank you. Corrected.

Thank you for providing the paper and the code for training. I would like to inquire about the preprocessing of .ply files to extract the necessary data for rigging and tracking.

Upon reviewing the code, I noticed that it utilizes the _vtx_traj.npy and _pts_traj.npy files as sources for trajectories. I am interested in reproducing the results using real data on a prepared mesh.

It would be greatly appreciated if you could share your code for generating the complete dataset.
Thank you for your assistance.