dome272 / Diffusion-Models-pytorch

PyTorch implementation of Diffusion Models (https://arxiv.org/pdf/2006.11239.pdf)

Training generates images with full red output

AdamWojtczak opened this issue · comments

While training the unchanged model on a different dataset (portraits of faces), I am getting a bunch of full red outputs:
[image]
I also changed the code to train on the same dataset, but greyscaled before training, and I still get monocolored outputs; this time they are either white or black:
[image]
Has anyone had the same issue? Is there something I can do to prevent this?

Hey, can you try training on the original dataset I used and tell me whether you get the same results, or whether that training also does not work?

These are the results of training on the landscapes dataset. The only thing I changed is the batch size: my GPU has only 4 GB, so it has to be 2.
[image]
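As an aside, a batch size of 2 gives very noisy gradient estimates, which can destabilize diffusion training. A common workaround when GPU memory caps the batch size is gradient accumulation: run several micro-batches before each optimizer step so the effective batch is larger. The thread does not confirm this is the cause; the sketch below uses a tiny `nn.Linear` stand-in for the UNet and random tensors for the data, purely to illustrate the pattern:

```python
import torch
import torch.nn as nn

# Tiny placeholder model and data, standing in for the repo's UNet and images.
model = nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.MSELoss()

accum_steps = 8  # micro-batch of 2 * 8 steps = effective batch size of 16
optimizer.zero_grad()
for step in range(32):
    x = torch.randn(2, 8)       # micro-batch of 2, fits in limited VRAM
    target = torch.randn(2, 8)  # stand-in for the predicted-noise target
    loss = loss_fn(model(x), target) / accum_steps  # scale so grads average
    loss.backward()             # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()        # one weight update per 8 micro-batches
        optimizer.zero_grad()
```

Dividing the loss by `accum_steps` keeps the accumulated gradient equal to the mean over the effective batch, so the learning rate does not need rescaling.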

Interesting. I opened a similar issue in the following repository: cloneofsimo/minDiffusion#4