maum-ai / faceshifter

Unofficial PyTorch Implementation for FaceShifter (https://arxiv.org/abs/1912.13457)

Have you ever encountered this phenomenon: the attr loss drops to near 0 while the rec loss stays at 0.01 and doesn't go down?

buzhangjiuzhou opened this issue · comments

[screenshots of the training loss curves]

And the results:
[screenshot of the generated results]

Before this, I trained with your original setup. It ran well for the first 400k steps; at that point the attr loss suddenly dropped to near 0, the rec loss jumped from 1e-3 to 0.01, and the results were ruined.

Hi, I think your model has run into mode collapse.
I hit the same issue: G and D stop learning competitively and fall into a bad minimum.

This tends to happen when the batch_size or the dataset is small relative to the capacity of the model.

How about increasing the dataset size or the batch_size?
Searching for other mode-collapse remedies may also help.

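If GPU memory rules out a larger batch_size, gradient accumulation can raise the effective batch size instead. A minimal sketch in plain PyTorch (the model, optimizer, and data here are placeholders, not the repo's actual code):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(8, 1)  # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
accum_steps = 4  # effective batch = micro-batch size * accum_steps

# Stand-in for a real data loader: 8 micro-batches of size 2
data = [(torch.randn(2, 8), torch.randn(2, 1)) for _ in range(8)]

n_updates = 0
opt.zero_grad()
for step, (x, y) in enumerate(data):
    # Scale the loss so accumulated gradients average over the effective batch
    loss = nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        opt.step()  # one optimizer update per accum_steps micro-batches
        opt.zero_grad()
        n_updates += 1
```

With 8 micro-batches and accum_steps = 4, the loop performs two optimizer updates, each equivalent to a batch of 8 samples.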

I realized the problem and decreased the learning rate (because of the limitations of the devices I have access to).

It helped.
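Lowering the learning rate can be done either in the training config or in place on running optimizers. A minimal sketch, assuming separate Adam optimizers for G and D (the lr value and the halving factor are illustrative, not the repo's actual settings):

```python
import torch
import torch.nn as nn

G = nn.Linear(4, 4)  # placeholder generator
D = nn.Linear(4, 1)  # placeholder discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

# Halve both learning rates in place when training destabilizes
for opt in (opt_G, opt_D):
    for group in opt.param_groups:
        group["lr"] *= 0.5

print(opt_G.param_groups[0]["lr"])  # 0.0001
```

Keeping the G and D rates in step preserves the balance of the adversarial game while slowing both players down.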