NVIDIA / warp

A Python framework for high performance GPU simulation and graphics

Home Page: https://nvidia.github.io/warp/


clone, copy, assign are not differentiable

xuan-li opened this issue · comments

The clone, copy, and assign operations appear to be non-differentiable.

To reproduce:

import warp as wp
import numpy as np
wp.init()

device = wp.get_device()
state_in = wp.from_numpy(np.array([1., 2., 3.]).astype(np.float32), dtype=wp.float32, requires_grad=True, device=device)
state_out = wp.zeros(state_in.shape, dtype=wp.float32, requires_grad=True, device=device)

@wp.kernel
def copy(a: wp.array(dtype=wp.float32), b: wp.array(dtype=wp.float32)):
    tid = wp.tid()
    b[tid] = a[tid]

tape = wp.Tape()
with tape:
    ######  not working ######
    state_out = wp.clone(state_in)
    # wp.copy(state_out, state_in)
    # state_out.assign(state_in)
    
    ###### working ######
    # wp.launch(kernel=copy, inputs=[state_in], outputs=[state_out], dim=state_in.shape[0])
    
grads = {state_out: wp.from_numpy(np.array([1., 1., 1.]).astype(np.float32), dtype=wp.float32)}
tape.backward(grads=grads)
print(state_in.grad.numpy())

However, I can work around this with a user-defined copy kernel (the commented-out wp.launch above).

I noticed that Warp's XPBD integrator uses these operations in many places where gradients are required, for example:

https://github.com/NVIDIA/warp/blob/db5fffd22bd379d91f0c84161066b755186a9bab/warp/sim/integrator_xpbd.py#L2056C41-L2056C46

particle_q.assign(new_particle_q)