Is the PyTorch backend differentiable?
ni-chen opened this issue
Hi,
Thanks for making this amazing package, it could be very useful.
May I ask if the PyTorch backend is differentiable?
Thanks.
Hey @ni-chen ,
this has been on my todo list for too long. Currently there are a few in-place operations preventing this. I don't think it should be too difficult to change, though I expect the simulator to run slower afterwards.
When I find the time I'll finally give it a shot. This would obviously be an awesome feature to have.
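To illustrate why in-place operations are the blocker, here is a minimal, hypothetical sketch (not taken from the fdtd codebase): PyTorch's autograd saves intermediate tensors for the backward pass, and mutating one of them in place invalidates the graph.

```python
import torch

# torch.exp saves its *output* for the backward pass
# (d/dx exp(x) = exp(x)), so mutating that output in place
# bumps its version counter and breaks backward().
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)   # autograd saves y for the backward pass
y += 1             # in-place update invalidates the saved tensor
try:
    y.sum().backward()
except RuntimeError as err:
    print("autograd failed:", err)
```

The out-of-place equivalent, `y = y + 1`, allocates a new tensor and keeps the graph valid, which is also why a differentiable simulator tends to use more memory and run somewhat slower.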
Thanks for your reply.
I believe it would be of great interest for many applications. @flaport
Would you be able to point to the places where the changes would have to be made? I could see if I can put together a PR, if it's not a very big change.
Is preallocating the field tensors and writing each step to a new index, instead of updating an already populated tensor, the general solution you have in mind?