NVlabs / edm

Elucidating the Design Space of Diffusion-Based Generative Models (EDM)


Zero initialization of convolutions

nicolas-dufour opened this issue

Hi,
I have observed that the code carefully initializes certain convolutions with zero init.
Do you have any reference for this design decision?

Thanks!

Hi, I am also confused about the weight initialization in different implementations.

Each implementation has its own initialization style

  • In the official DDPM repo, the convs before residual connections and the final conv are initialized with zeros, while the other convs use zero-mean uniform distributions.
  • In the ADM guided-diffusion repo, the convs before residual connections and the final conv are also initialized with zeros, while the others use the PyTorch default.
  • In the Score-Based SDE repo, the implementation covers both the DDPM- and NCSN-style initializations.
  • In this repo, I think the scheme is similar to Score-Based SDE, but it still differs from the three codebases mentioned above (see the sketch below for the common zero-init pattern).
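
For reference, here is a minimal PyTorch sketch of the zero-init pattern shared by the DDPM and ADM codebases: zero-initialize the conv feeding the residual addition and the final output conv. The helper and module names (`zero_module`, `ResBlock`, `TinyUNetTail`) are my own illustration, not code from any of the repos above.

```python
# Minimal sketch of DDPM/ADM-style zero initialization (illustrative only,
# not taken from the EDM / DDPM / ADM / Score-SDE codebases).
import torch
import torch.nn as nn


def zero_module(module: nn.Module) -> nn.Module:
    """Zero all parameters so the module's initial output is 0."""
    for p in module.parameters():
        nn.init.zeros_(p)
    return module


class ResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)               # default init
        self.conv2 = zero_module(nn.Conv2d(channels, channels, 3, padding=1))  # zero init
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv2(self.act(self.conv1(self.act(x))))
        return x + h  # at initialization h == 0, so the block is an identity mapping


class TinyUNetTail(nn.Module):
    """Output head: the final conv is also zero-initialized."""
    def __init__(self, channels: int, out_channels: int = 3):
        super().__init__()
        self.block = ResBlock(channels)
        self.out_conv = zero_module(nn.Conv2d(channels, out_channels, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out_conv(self.block(x))  # initial network output is exactly 0
```

With this initialization, every residual block starts out as an identity mapping and the network's initial output is exactly zero, which is the motivation usually given for the design.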

My experiments and observations

Recently, I tried to train diffusion models (DDPM, DDIM, EDM, ...) with the original basic UNet (35.7M #params) on CIFAR-10. Here are some observations:

  • I can successfully reproduce the FIDs reported by DDPM and DDIM without any custom weight initialization; all parameters are initialized by the PyTorch default.
  • However, my optimal learning rate differs from the one in the official repos (1e-4 vs. 2e-4). When I tried the official one (2e-4), the FID got far worse.
  • I trained the EDM model with my 35.7M mini network (no custom initialization) and my learning rate, and the results are reasonable (better than DDIM).
  • However, when I trained with the learning rate proposed by EDM (10e-4), the FID again got far worse. To confirm this, I replaced networks.py with mine and ran the official EDM code; the FID was still bad.

It seems that the mathematical part of the diffusion model (training objective + sampler) can be treated as an independent component, but the neural network (and its initialization) may be strongly coupled with the hyper-parameters.

I wonder if this is really the case, and why the initialization and hyper-parameters matter so much.