
Zuko - Normalizing flows in PyTorch

Zuko is a Python package that implements normalizing flows in PyTorch. It relies as much as possible on distributions and transformations already provided by PyTorch. Unfortunately, the Distribution and Transform classes of torch are not subclasses of torch.nn.Module, which means that you cannot move their internal tensors to the GPU with .to('cuda') or retrieve their parameters with .parameters().
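To see the problem concretely, here is a minimal illustration using only standard torch calls: a plain torch.distributions.Normal exposes neither a .to method nor a .parameters method.

import torch

# A plain torch distribution is not a torch.nn.Module
dist = torch.distributions.Normal(torch.zeros(3), torch.ones(3))

hasattr(dist, 'to')          # False: cannot move internal tensors to the GPU
hasattr(dist, 'parameters')  # False: cannot hand parameters to an optimizer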

To solve this problem, zuko defines two abstract classes: DistributionModule and TransformModule. The former is any Module whose forward pass returns a Distribution, and the latter is any Module whose forward pass returns a Transform. A normalizing flow is simply a DistributionModule that contains a list of TransformModules and a base DistributionModule. This design allows flows to behave like distributions while retaining the benefits of Module. It also makes the implementations easier to understand and extend.
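As a sketch of this design, a small conditional flow could be assembled by hand: stack autoregressive TransformModules over a Gaussian base DistributionModule. The class names below (zuko.flows.Flow, zuko.flows.MaskedAutoregressiveTransform, zuko.flows.Unconditional, zuko.distributions.DiagNormal) reflect the API described above but may differ across versions, so treat this as an assumption and check the documentation.

import torch
import zuko

# Two masked autoregressive transforms (3 sample features, 5 context features)
# stacked over a diagonal Gaussian base
flow = zuko.flows.Flow(
    transforms=[
        zuko.flows.MaskedAutoregressiveTransform(features=3, context=5),
        zuko.flows.MaskedAutoregressiveTransform(features=3, context=5),
    ],
    base=zuko.flows.Unconditional(
        zuko.distributions.DiagNormal,
        torch.zeros(3),
        torch.ones(3),
        buffer=True,  # register loc and scale as non-trainable buffers
    ),
)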

In the Avatar cartoon, Zuko is a powerful firebender 🔥

Installation

The zuko package is available on PyPI, which means it is installable via pip.

pip install zuko

Alternatively, if you need the latest features, you can install it from the repository.

pip install git+https://github.com/francois-rozet/zuko

Getting started

Normalizing flows are provided in the zuko.flows module. To build one, supply the number of sample and context features, as well as the hyperparameters of the transformations. Then, feeding a context y to the flow returns a conditional distribution p(x | y), which can be evaluated and sampled from.

import torch
import zuko

# Neural spline flow (NSF) with 3 sample features and 5 context features
flow = zuko.flows.NSF(3, 5, transforms=3, hidden_features=[128] * 3)

# Train to maximize the log-likelihood
optimizer = torch.optim.AdamW(flow.parameters(), lr=1e-3)

for x, y in trainset:  # trainset yields pairs of samples x and contexts y
    loss = -flow(y).log_prob(x)  # -log p(x | y)
    loss = loss.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Sample 64 points x ~ p(x | y*)
x = flow(y_star).sample((64,))  # y_star is a conditioning context with 5 features
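Because the flow is a torch.nn.Module, it can be moved to the GPU like any other network. A minimal sketch, assuming a CUDA device is available:

# Move the flow's parameters and buffers to the GPU
flow = flow.to('cuda')

# The context must live on the same device as the flow
x = flow(y_star.to('cuda')).sample((64,))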

For more information, check out the documentation at zuko.readthedocs.io.

Available flows

Class  Year  Reference
MAF    2017  Masked Autoregressive Flow for Density Estimation
NSF    2019  Neural Spline Flows
NCSF   2020  Normalizing Flows on Tori and Spheres
SOSPF  2019  Sum-of-Squares Polynomial Flow
NAF    2018  Neural Autoregressive Flows
UNAF   2019  Unconstrained Monotonic Neural Networks
CNF    2018  Neural Ordinary Differential Equations
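All of these classes follow the constructor pattern shown in Getting started (sample features, context features, then hyperparameters), so switching architectures is usually a one-line change. A sketch; the exact keyword arguments accepted by each class may differ, see the documentation.

import zuko

# Same task, different architectures
flow = zuko.flows.MAF(3, 5, transforms=3)
flow = zuko.flows.CNF(3, 5)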

Contributing

If you have a question or an issue, or if you would like to contribute, please read our contributing guidelines.

License

Zuko is distributed under the MIT license.

