kamenbliznashki / normalizing_flows

Pytorch implementations of density estimation algorithms: BNAF, Glow, MAF, RealNVP, planar flows

Model behaving differently in train and eval modes

RuskinManku opened this issue · comments

I am training a RealNVP model on a distribution, but I get very different loss values when I switch the model between eval() and train() modes. I know that switching modes in PyTorch changes the behavior of dropout and batch-norm layers, so it makes sense for the loss to change, since the RealNVP model contains batch-norm layers. However, this difference causes an error about my input being out of support when I run evaluation. One workaround is to deactivate batch norm, which is a direct argument to RealNVP. Is this really the only way? And if batch norm affects the distribution like this, why do we add it at all?
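The discrepancy described above can be reproduced outside the flow itself. A minimal sketch (plain PyTorch, not this repo's RealNVP class): in train() mode a BatchNorm layer normalizes with the current batch's statistics, while in eval() mode it uses the running averages accumulated during training, so the same input produces different outputs — and in a flow, a different log-determinant and loss.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network with a BatchNorm1d layer, standing in for the
# batch-norm layers inside a RealNVP coupling stack.
net = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))

x = torch.randn(8, 4)

net.train()
out_train = net(x)   # normalized with this batch's mean/variance

net.eval()
out_eval = net(x)    # normalized with the stored running mean/variance

# The two outputs differ, which is why the flow's log-likelihood
# (and hence the loss) shifts when switching modes.
print(torch.allclose(out_train, out_eval))
```

Because the running statistics are only a few updates old here, the gap is large; after many training batches the running averages converge toward the data statistics and the train/eval gap shrinks, though it never disappears entirely for small batches.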