A PyTorch-based library for all things neural differential equations. Maintained by DiffEqML.
```bash
git clone https://github.com/DiffEqML/torchdyn.git
cd torchdyn
python setup.py install
```
Interest in the blend of differential equations, deep learning, and dynamical systems has been reignited by recent works [1,2]. Modern deep learning frameworks such as PyTorch, coupled with progressive improvements in computational resources, have allowed the continuous version of neural networks, with variants dating back to the 80s [3], to finally come to life and provide a novel perspective on classical machine learning problems (e.g. density estimation [4]).
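The link between discrete networks and their continuous counterpart can be sketched in a few lines: a residual block `x + h * f(x)` is exactly one Euler step of the ODE `dx/dt = f(x)`, so stacking more, smaller steps approaches the continuous-depth limit that Neural ODEs integrate directly. A minimal, pure-Python illustration (not torchdyn code; `f` and `resnet_like` are hypothetical names for this sketch):

```python
import math

def f(x):
    # toy vector field: dx/dt = -x, with known solution x(t) = x0 * exp(-t)
    return -x

def resnet_like(x0, depth, t_final=1.0):
    # a "ResNet" whose residual blocks are Euler steps of dx/dt = f(x);
    # step size h plays the role of layer spacing
    h = t_final / depth
    x = x0
    for _ in range(depth):
        x = x + h * f(x)  # one residual block == one Euler step
    return x

exact = math.exp(-1.0)              # continuous solution at t = 1
shallow = resnet_like(1.0, depth=4)
deep = resnet_like(1.0, depth=1000)
# deeper "networks" converge to the continuous-depth solution
assert abs(deep - exact) < abs(shallow - exact)
```

As `depth` grows the discrete network recovers the ODE solution, which is the intuition behind treating depth as a continuous variable.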
Since the introduction of the torchdiffeq library with the seminal work [1] in 2018, little effort has been made by the PyTorch research community toward a unified framework for neural differential equations. While significant progress is being made by the Julia community and SciML [5], we believe torchdyn, a native PyTorch library with a focus on deep learning, to be a valuable asset for the research ecosystem.
Central to the torchdyn approach are continuous neural networks, where width, depth, or both are taken to their infinite limit. On the optimization front, we consider continuous "data-stream" regimes and gradient flow methods, where the dataset represents a time-evolving signal processed by the neural network to adapt its parameters.
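The gradient flow view can be made concrete: gradient descent with learning rate `h` is the Euler discretization of the continuous flow `d(theta)/dt = -grad L(theta)`. A hedged, pure-Python sketch on a quadratic loss (illustrative only; not torchdyn's optimization API):

```python
import math

def grad_loss(theta):
    # L(theta) = 0.5 * (theta - 3)**2, so grad L = theta - 3
    return theta - 3.0

def gradient_flow(theta0, t_final, steps):
    # gradient descent as an Euler discretization of the flow
    # d(theta)/dt = -grad L(theta)
    h = t_final / steps
    theta = theta0
    for _ in range(steps):
        theta = theta - h * grad_loss(theta)
    return theta

# the continuous flow has the closed form theta(t) = 3 + (theta0 - 3) * exp(-t)
exact = 3.0 + (0.0 - 3.0) * math.exp(-2.0)
approx = gradient_flow(0.0, t_final=2.0, steps=2000)
assert abs(approx - exact) < 1e-2
```

With small steps the discrete iterates track the continuous trajectory, which is what makes ODE analysis tools applicable to training dynamics.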
By providing a centralized, easy-to-access collection of model templates, tutorials, and application notebooks, we hope to speed up research in this area and ultimately contribute to turning neural differential equations into an effective tool for control, system identification, and common machine learning tasks.
torchdyn is developed and maintained by the core DiffEqML team, with the generous support of the deep learning community.
torchdyn leverages modern PyTorch best practices and handles training with pytorch-lightning [6]. We build Graph Neural ODEs utilizing the Graph Neural Network (GNN) API of dgl [6].
Our aim with torchdyn is to provide a unified, flexible API for the most recent advances in continuous deep learning. Examples include neural differential equation variants, e.g.
- Neural Ordinary Differential Equations (Neural ODE) [1]
- Neural Stochastic Differential Equations (Neural SDE) [7,8]
- Graph Neural ODEs [9]
- Hamiltonian Neural Networks [10]
- Depth-variant versions
- Recurrent or "hybrid" versions
- Augmentation strategies to relieve neural differential equations of their expressivity limitations and reduce the computational burden of the numerical solver
- Alternative or modified adjoint training techniques
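The augmentation idea above can be sketched in a few lines: the input is lifted to a higher-dimensional space by appending zero coordinates, giving the flow room to avoid trajectory crossings that a same-dimension ODE flow cannot represent. A hedged, pure-Python illustration (the `augment` helper is hypothetical, not torchdyn's augmentation API):

```python
def augment(x, extra_dims):
    # zero-augmentation: lift a state vector by appending `extra_dims`
    # zero coordinates; the ODE is then solved in the augmented space
    return list(x) + [0.0] * extra_dims

x = [1.0, 2.0]
x_aug = augment(x, extra_dims=3)
assert x_aug == [1.0, 2.0, 0.0, 0.0, 0.0]
# at readout, the extra coordinates are typically discarded or passed
# through a small linear head
```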
The current version of torchdyn contains the following self-contained quickstart examples / tutorials (with a lot more to come):
- `00_quickstart`: offers a quickstart guide for torchdyn and Neural DEs
- `01_cookbook`: here, we explore the API and how to define Neural DE variants within torchdyn
- `02_classification`: convolutional Neural DEs on MNIST
- `03_crossing_trajectories`: a standard benchmark problem, highlighting expressivity limitations of Neural DEs, and how they can be addressed
- `04_augmentation_strategies`: augmentation API for Neural DEs
and the advanced tutorials
- `05_integral_adjoint`: minimize integral losses with torchdyn's special integral loss adjoint [18] to track a sinusoidal signal
- `06_hamiltonian_neural_network`: learn dynamics of energy-preserving systems with a simple implementation of Hamiltonian Neural Networks in torchdyn [10]
- `07_neural_graph_de`: first steps into the vast world of Neural GDEs [9], or ODEs on graphs parametrized by graph neural networks (GNNs). Classification on Cora.
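To give a flavor of the Hamiltonian tutorial's subject matter: instead of learning the dynamics `dq/dt` and `dp/dt` directly, a Hamiltonian Neural Network learns a scalar `H(q, p)` and derives the dynamics as `dq/dt = dH/dp`, `dp/dt = -dH/dq`. A hedged, pure-Python sketch with a fixed (not learned) harmonic-oscillator Hamiltonian and a structure-preserving symplectic Euler integrator:

```python
def dH_dq(q, p):
    # for H = 0.5*q**2 + 0.5*p**2 (harmonic oscillator)
    return q

def dH_dp(q, p):
    return p

def integrate(q, p, steps, h=0.01):
    # symplectic Euler: update p with the current q, then q with the new p;
    # this respects the energy-preserving structure of the dynamics
    for _ in range(steps):
        p = p - h * dH_dq(q, p)
        q = q + h * dH_dp(q, p)
    return q, p

def energy(q, p):
    return 0.5 * q * q + 0.5 * p * p

q0, p0 = 1.0, 0.0
q1, p1 = integrate(q0, p0, steps=1000)
# energy only oscillates slightly (O(h)) instead of drifting away
assert abs(energy(q1, p1) - energy(q0, p0)) < 0.01
```

In an actual HNN the partial derivatives of a learned `H` come from automatic differentiation; this sketch only shows why deriving dynamics from a scalar energy bakes conservation into the model.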
Check our wiki for a full description of available features.
The current offering of torchdyn is limited compared to the rich ecosystem of continuous deep learning. If you are a researcher working in this space, and particularly if one of your previous works happens to be a WIP feature, feel free to reach out and help us with its implementation.
In particular, we are missing the following, which will be added in order:
- Latent variable variants: Latent Neural ODE, ODE2VAE
- Advanced recurrent versions: GRU-ODE-Bayes
- Alternative adjoint for Neural SDE and Jump Stochastic Neural ODEs, as in [16]
- Lagrangian Neural Networks [17]
torchdyn is meant to be a community effort: we welcome all contributions of tutorials, model variants, numerical methods, and applications related to continuous deep learning.
If you find torchdyn valuable for your research or applied projects, please consider citing:
```bibtex
@article{massaroli2020stable,
  title={Stable Neural Flows},
  author={Massaroli, Stefano and Poli, Michael and Bin, Michelangelo and Park, Jinkyoo and Yamashita, Atsushi and Asama, Hajime},
  journal={arXiv preprint arXiv:2003.08063},
  year={2020}
}
```