Meowuu7 / QuasiSim

[ECCV'24] Parameterized Quasi-Physical Simulators for Dexterous Manipulations Transfer

Home Page: https://meowuu7.github.io/QuasiSim/


Issues with File Availability, Script Contents, and Performance

KailinLi opened this issue

Amazing ideas 👍 and really solid experiments 👍!

I'm working with your open-source code and have a few questions:

  • It appears that three required files are missing: ./assets/nearest_dyn_verts_idxes.npy, ckpts/grab/102/diffhand_act.npy, and ckpts/grab/102/retargeted_shadow.pth. Could you provide these, or guide me on how to generate them?
  • I notice that the contents of scripts_new/train_grab_mano_wreact.sh and scripts_new/train_grab_mano_wreact_optacts.sh seem identical. Is this intentional, or might there be a mix-up?
  • Could you share the GPU specifications used for your experiments? When running bash scripts_new/train_grab_pointset_points_dyn_retar_pts.sh, I run out of memory on a 24 GB RTX 4090 even after reducing the batch size to 8.
  • Regarding runtime, each script takes roughly 100 hours of optimization on a single 4090; for example, train_grab_mano.sh requires about 93 hours, which means running all of the scripts would take over a month. Is this expected, or might I have something set up incorrectly?

Thanks a lot for your help!

Best

Hello @KailinLi ,

Thank you for your interest in our work! We're grateful for your feedback and for bringing these issues to our attention.

  • Initially, ./assets/nearest_dyn_verts_idxes.npy was not tracked by git, but I have now added it. Additionally, missing checkpoints have been included and can be downloaded here.
  • The script scripts_new/train_grab_mano_wreact_optacts.sh has been corrected to use the configuration dyn_grab_pointset_mano_dyn_optacts.conf.
  • scripts_new/train_grab_pointset_points_dyn_retar_pts.sh does require a large amount of GPU memory; we run this part of the experiments on A800 GPUs with 80 GB. There is also an alternative optimization suite with a smaller memory footprint; please refer to the updated README for details.
  • The first optimization stage demands substantial manual effort: we monitor the optimization process and decide when to transition between stages. Because the optimal number of iterations per stage varies across sequences, we initially set a high maximum iteration count and switch to the next stage as soon as the corresponding loss has converged to a satisfactory level, rather than waiting for the maximum to be reached (see the sketch after this list for one way such a check could be automated). For the example currently included in the repository, a smaller total iteration count suffices, and I have adjusted this setting accordingly. This parameter is currently hard-coded, but I plan to expose it through the configuration files in the future.
    I do not have enough time today to thoroughly test whether these adjusted optimization iterations are adequate. I'll test them over the next several days and provide the runtime for each step. Stay tuned for updates!
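For reference, below is a minimal, hypothetical sketch of the kind of convergence check described above: stop (or switch stages) once the smoothed loss has stopped improving, instead of always running to the maximum iteration. The class, window size, and tolerance are illustrative placeholders and are not part of the QuasiSim codebase.

```python
from collections import deque

class ConvergenceMonitor:
    """Illustrative helper: signal when a loss curve has plateaued.

    Not part of QuasiSim; the window size and tolerance are placeholders
    that would need tuning per optimization stage / sequence.
    """

    def __init__(self, window=500, rel_tol=1e-3):
        self.window = window      # number of recent iterations per comparison half
        self.rel_tol = rel_tol    # minimum relative improvement that still counts as progress
        self.losses = deque(maxlen=2 * window)

    def update(self, loss: float) -> bool:
        """Record one loss value; return True once the loss has plateaued."""
        self.losses.append(loss)
        if len(self.losses) < 2 * self.window:
            return False
        recent = list(self.losses)
        prev_avg = sum(recent[: self.window]) / self.window
        curr_avg = sum(recent[self.window:]) / self.window
        improvement = (prev_avg - curr_avg) / max(abs(prev_avg), 1e-12)
        return improvement < self.rel_tol


# Hypothetical usage inside a training loop:
# monitor = ConvergenceMonitor(window=500, rel_tol=1e-3)
# for it in range(max_iters):
#     loss = run_one_optimization_step()   # placeholder for the stage's update step
#     if monitor.update(loss):
#         break                            # converged: switch to the next stage early
```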

Thank you again, and best regards.

Thank you so much for your swift and helpful response!

Most of my issues are resolved, and I’ve managed to get all the code running in a reasonable amount of time. I truly appreciate your support.

Looking forward to those updates you mentioned!

Best,