about fake_B_random
kiraracreams opened this issue
I trained a colorization task on my own dataset of about 22,000 images. Last time I could not get good results for either fake_B_encoded or fake_B_random. Now I add noise into the middle layers and use nz=512; fake_B_encoded looks good, but the checkerboard artifacts in fake_B_random are very serious. I tried upsampling+conv, but the results have not improved. Is this a problem of too little data? Colorization is a complex task, so maybe it prevents the Encoder from encoding pictures into a continuous latent space. Could you give me some suggestions?
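(For anyone landing here later: the "upsampling+conv" fix mentioned above usually means replacing a transposed convolution with nearest-neighbor upsampling followed by a plain convolution. A minimal PyTorch sketch, where `up_conv` and the kernel sizes are illustrative rather than the repo's actual layers:)

```python
import torch.nn as nn

# A transposed convolution like this is a common source of checkerboard artifacts:
#   nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)

def up_conv(in_ch, out_ch):
    # Same 2x spatial upsampling, but done as resize-then-convolve, which
    # avoids the uneven kernel overlap that produces the checkerboard pattern.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode='nearest'),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
    )
```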
I think nz=512 is too high; it simply overfits. On data it knows, it has a perfect latent representation, which leads to perfect colorization, but on a random latent vector the network is unable to produce anything sensible. Please correct me if I am wrong on this :)
Thanks for your answer!
I will try a smaller nz and a higher lambda_kl. Because nz was too high, I set lambda_kl = 0.0001 to avoid gradient explosion.
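(For context: in BicycleGAN-style models, lambda_kl weights a KL penalty that pulls the Encoder's latent distribution toward the N(0, I) prior that random z vectors are sampled from, which is one reason fake_B_random can degrade when the penalty is too weak. A minimal sketch of such a term; the function name and signature are illustrative, not the repo's actual code:)

```python
import torch

def kl_loss(mu, logvar, lambda_kl=0.0001):
    # KL divergence between the encoder's N(mu, sigma^2) and the standard
    # normal prior N(0, I), summed over the nz latent dimensions.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    # lambda_kl trades off reconstruction quality against how closely the
    # encoded latent codes match the prior used for fake_B_random.
    return lambda_kl * kl.mean()
```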
Good luck!