openmm / openmm-torch

OpenMM plugin to define forces with neural networks


Supporting any Torch model (not just TorchScript)

RaulPPelaez opened this issue

commented

It would be great to be able to use a torch.compile'd model in OpenMM-Torch.

AFAIK there is no way to cross the Python/C++ barrier with a torch.compile'd model; there is no equivalent of what torch::jit::Module provides for TorchScript.
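For reference, here is the working path today (per the openmm-torch README) next to what torch.compile gives you; MyForce and the file name are just placeholders:

```python
import torch
from openmmtorch import TorchForce

class MyForce(torch.nn.Module):
    def forward(self, positions):
        # Toy potential: sum of squared distances from the origin.
        return torch.sum(positions ** 2)

# TorchScript path: the saved module can be loaded from C++ with
# torch::jit::load, which is how TorchForce crosses the barrier today.
torch.jit.script(MyForce()).save('model.pt')
force = TorchForce('model.pt')

# torch.compile path: the result is an OptimizedModule that lives only
# inside the Python runtime; there is nothing for C++ to load.
compiled = torch.compile(MyForce())
```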

I can think of two solutions:

  1. Let TorchForce accept any Python class as its module (as long as its forward takes and returns the right Tensors).
    Doing this requires passing a generic Python class/function to C++ through SWIG, which I have not been able to do. It is really easy with pybind11, but I cannot manage to mix pybind11 and SWIG for the life of me.
  2. Take a TorchScript model like we do now, but internally call torch.compile on it (see the sketch after this list).
    AFAIK there is no way to call torch.compile from C++, so we would have to invoke it via pybind11, perhaps in the TorchForce constructor. A py::object would then be stored instead of a torch::jit::Module.

I think 2 is the simplest option given the current state of the codebase. Allowing a model that is not easily serializable (i.e. not TorchScript) would make serializing TorchForce an issue.
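To make the serialization concern concrete, a sketch assuming TorchForce's existing XML serialization, which works because a TorchScript program is a self-contained artifact that can be embedded:

```python
from openmm import XmlSerializer
from openmmtorch import TorchForce

force = TorchForce('model.pt')            # wraps a serialized TorchScript module
xml = XmlSerializer.serialize(force)      # the scripted program can be embedded
restored = XmlSerializer.deserialize(xml)
# A torch.compile'd callable has no portable on-disk form, so a TorchForce
# holding one could not round-trip like this.
```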

I would like to hear your thoughts on this!

Here is a relevant PyTorch forum thread; at some point torch.export may become an option:
https://dev-discuss.pytorch.org/t/the-future-of-c-model-deployment/1282

Edit: maybe it is already possible; this looks promising: https://pytorch.org/docs/main/torch.compiler_aot_inductor.html
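A rough sketch of the AOTInductor route from that page. These APIs are experimental and have moved between releases, so the entry point below (torch._export.aot_compile, as in the docs at the time) is an assumption:

```python
import torch

class MyForce(torch.nn.Module):
    def forward(self, positions):
        return torch.sum(positions ** 2)

example_inputs = (torch.randn(10, 3),)
# Compile the model ahead of time into a shared library that a pure C++
# runtime can load, with no Python interpreter at simulation time.
so_path = torch._export.aot_compile(MyForce(), example_inputs)
print(so_path)  # path to the generated .so
```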

Any solution that requires a Python runtime to be present will be very limiting. Think of Folding@home, for example.

It sounds like this is all still in flux, and there are important things torch.compile can't do yet. Hopefully once everything settles down, there will be a clear migration path. I really hope they continue to support jit compilation, though. Having to rely on ahead-of-time compiled libraries would also be very limiting, and likely infeasible for important use cases.

commented

Steve's AOT thingy seems to be the only PyTorch-endorsed way, but I have zero faith it is actually usable as of today -.-
I agree, Peter, things are still super experimental.
It is a shame, though, to be able to run a torch.compile'd model and an OpenMM simulation in the same Python script but not mix the two. From a user perspective it is a bit frustrating.