Understanding SoftclipTransform
arnauqb opened this issue
Description
I'm trying to limit the output domain of a normalizing flow. I noticed that there is a zuko.transforms.SoftclipTransform transformation that could be useful for my case.
Reproduce
This is what I have tried:
```python
import torch
import zuko


class MyFlow(zuko.flows.FlowModule):
    def __init__(
        self,
        features: int,
        context: int = 0,
        transforms: int = 3,
        randperm: bool = False,
        **kwargs,
    ):
        orders = [
            torch.arange(features),
            torch.flipud(torch.arange(features)),
        ]
        transforms = [
            zuko.flows.MaskedAutoregressiveTransform(
                features=features,
                context=context,
                order=torch.randperm(features) if randperm else orders[i % 2],
                **kwargs,
            )
            for i in range(transforms)
        ]
        base = zuko.flows.Unconditional(
            zuko.distributions.DiagNormal,
            torch.zeros(features),
            torch.ones(features),
            buffer=True,
        )
        transforms.append(zuko.flows.Unconditional(zuko.transforms.SoftclipTransform))
        super().__init__(transforms, base)
```
Expected behavior
I would expect the samples to be constrained to the [-5, 5] domain, since that is the default bound argument of SoftclipTransform.
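For reference, a soft clip with the default bound should squash any real input into (-5, 5). A minimal sketch, assuming the transform uses the common smooth-clipping form y = x / (1 + |x / bound|) (an assumption about the implementation, not taken from the zuko source):

```python
# Hypothetical soft-clipping function, assuming the common form
# y = x / (1 + |x / bound|), which maps the real line smoothly
# into (-bound, bound).
def softclip(x, bound=5.0):
    return x / (1 + abs(x / bound))

for x in [0.0, 1.0, 10.0, 1000.0]:
    print(round(softclip(x), 3))  # 0.0, 0.833, 3.333, 4.975
```

Even for very large inputs, the output stays strictly inside (-5, 5), which is the behavior I expected from the flow's samples.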
I have two questions:
- Why is the domain not limited to [-5, 5]?

```python
import matplotlib.pyplot as plt

flow = MyFlow(1, 1, transforms=5, hidden_features=[50])
samples = flow(torch.rand(1)).sample((10000,))
plt.hist(samples.numpy(), bins=100)
```
- How can I change the bound argument of the transformation? I noticed that the transformation class, rather than an instance, is passed to the flow, so I'm not sure how to change its initialization arguments.
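On the second question, one pattern that might work: if Unconditional simply calls the object it is given to build the transform (an assumption about its behavior, not confirmed here), then initialization arguments can be baked in with functools.partial or a lambda. A sketch, with the hypothetical zuko usage shown in comments and the factory mechanism demonstrated on a plain stand-in class:

```python
from functools import partial

# Hypothetical zuko usage, assuming Unconditional calls the given
# factory to construct the transform:
#
#   transforms.append(zuko.flows.Unconditional(
#       partial(zuko.transforms.SoftclipTransform, bound=3.0)
#   ))
#
# The same idea on a plain stand-in class, showing how partial
# bakes keyword arguments into a zero-argument factory:
class FakeTransform:
    def __init__(self, bound=5.0):
        self.bound = bound

factory = partial(FakeTransform, bound=3.0)
print(factory().bound)  # 3.0
```

The factory is still called with no arguments, exactly like passing the bare class, but it now produces instances with the non-default bound.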