NVIDIA / warp

A Python framework for high-performance GPU simulation and graphics

Home Page: https://nvidia.github.io/warp/

Cannot use `wp.Tape` with `FeatherstoneIntegrator`

etaoxing opened this issue

I'm wrapping wp.Tape() inside a torch.autograd.Function, and example_cloth_throw.py works after switching the integrator to FeatherstoneIntegrator (it produces the same gradient values). However, when I add an articulation to the scene, I get a segfault. It looks like tape.backward() fails while computing the kernel adjoint for eval_dense_solve_batched (see here). Is there a quick fix for this?

<warp.context.Kernel object at 0x131c15d00> integrate_particles
<warp.context.Kernel object at 0x132c46850> eval_body_inertial_velocities
<warp.context.Kernel object at 0x132b924c0> eval_rigid_id
<warp.context.Kernel object at 0x132b66310> eval_rigid_fk
<warp.context.Kernel object at 0x132c43a00> integrate_generalized_joints
<warp.context.Kernel object at 0x132c2abb0> eval_dense_solve_batched
Fatal Python error: Segmentation fault

Thread 0x000000016d70f000 (most recent call first):
<no Python frame>

Current thread 0x00000001e4f3dec0 (most recent call first):

  File "../lib/python3.8/site-packages/warp/context.py", line 4251 in launch
  File "../lib/python3.8/site-packages/warp/tape.py", line 140 in backward
  File "./example_cloth_throw.py", line 127 in backward
  File "..lib/python3.8/site-packages/torch/autograd/function.py", line 274 in apply
  File "..lib/python3.8/site-packages/torch/autograd/__init__.py", line 200 in backward
  File "../lib/python3.8/site-packages/torch/_tensor.py", line 487 in backward
  File "./example_cloth_throw.py", line 386 in step1
  File "./example_cloth_throw.py", line 445 in <module>
  File "../lib/python3.8/runpy.py", line 87 in _run_code
  File "../lib/python3.8/runpy.py", line 194 in _run_module_as_main
Segmentation fault (core dumped)
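
For reference, my wrapper follows the standard Warp/PyTorch interop pattern, roughly like the sketch below. This is simplified to a toy kernel; the kernel, array sizes, and names here are placeholders rather than the actual simulation code.

import torch
import warp as wp

wp.init()

@wp.kernel
def scale_kernel(x: wp.array(dtype=float), y: wp.array(dtype=float)):
    tid = wp.tid()
    y[tid] = 2.0 * x[tid]

class ScaleFunc(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # record the Warp launches on a tape so the adjoints can be replayed later
        ctx.tape = wp.Tape()
        ctx.x = wp.from_torch(x)
        ctx.y = wp.zeros(ctx.x.shape[0], dtype=wp.float32, device=ctx.x.device, requires_grad=True)
        with ctx.tape:
            wp.launch(scale_kernel, dim=ctx.x.shape[0], inputs=[ctx.x], outputs=[ctx.y])
        return wp.to_torch(ctx.y)

    @staticmethod
    def backward(ctx, adj_y):
        # feed the incoming torch gradient in as the adjoint of the output array
        ctx.tape.backward(grads={ctx.y: wp.from_torch(adj_y.contiguous())})
        return wp.to_torch(ctx.tape.gradients[ctx.x])

x = torch.ones(8, requires_grad=True)
y = ScaleFunc.apply(x)
y.sum().backward()  # y = 2x, so x.grad should be all 2s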

Hi @etaoxing,

Thank you for reporting this issue! Could you please check whether it is resolved by passing requires_grad=True to the builder.finalize() method?

self.model = builder.finalize(requires_grad=True)

We will update this example to ensure that all model variables are instantiated with automatic differentiation enabled.
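
For anyone else running into this, the flag goes into the finalize() call when the model is built. Below is a minimal sketch, assuming the usual wp.sim.ModelBuilder workflow with the scene setup elided; the variable names are illustrative.

import warp as wp
import warp.sim

wp.init()

builder = wp.sim.ModelBuilder()
# ... add the cloth particles and the articulation to the builder ...

# finalize() allocates the model arrays; requires_grad=True marks them as
# differentiable so wp.Tape can record the launches and replay their adjoints
model = builder.finalize(requires_grad=True)
integrator = wp.sim.FeatherstoneIntegrator(model)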

Yep, it works after passing in requires_grad=True!