TachibanaYoshino / AnimeGAN

A TensorFlow implementation of AnimeGAN for fast photo animation! This is the open-source code for the paper 「AnimeGAN: a novel lightweight GAN for photo animation」, which uses the GAN framework to transform real-world photos into anime-style images.

What d_loss value is good? And with the default parameters, can I reproduce AnimeGAN.model-60.meta?

tankfly2014 opened this issue · comments

I tried resuming training from the example checkpoint file AnimeGAN.model-60.meta (epoch 60). When training started, d_loss was around 2, but as training continued it rose above 2 and eventually reached 20. The d_loss keeps increasing and the results are getting worse.

With the default parameters alone I could not get good results; it seems the parameters need constant adjustment.
For this model to work well, does d_loss have to stay below some threshold n? 5? 7?
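For reference, a minimal sketch (TensorFlow 1.x) of how resuming from the released checkpoint typically looks; the checkpoint directory path here is an assumption and may differ from the repo's actual layout:

```python
import tensorflow as tf

# Assumed location of the released epoch-60 weights; adjust to your layout.
checkpoint_dir = "checkpoint/AnimeGAN"

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and restore the epoch-60 variables.
    saver = tf.train.import_meta_graph(checkpoint_dir + "/AnimeGAN.model-60.meta")
    saver.restore(sess, tf.train.latest_checkpoint(checkpoint_dir))
    # Any training resumed here continues from epoch 60, so its loss curve
    # will not look like the curve of a run started from scratch.
```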

Let me share my opinion. First, you don't need to pay too much attention to the discriminator's loss, because the loss is not a quantitative evaluation metric. Second, every training run from scratch has some randomness, so the results will not be exactly reproduced. Finally, the default parameters are the same as those in the paper and are for reference only. The 60-epoch pre-trained weights I released were selected based on results on the validation set; there is no need to continue training from them.
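A minimal sketch of that kind of qualitative check: run the generator on a few fixed validation photos every epoch and save the outputs for visual inspection, instead of watching d_loss. The tensor and function names here (fake_op, real_placeholder, save_validation_samples) are hypothetical placeholders, not the repo's actual API:

```python
import os
import cv2
import numpy as np

def save_validation_samples(sess, fake_op, real_placeholder, val_images, epoch, out_dir="val_samples"):
    """Write the generator's output on fixed validation photos to disk."""
    os.makedirs(out_dir, exist_ok=True)
    for i, img in enumerate(val_images):
        # val_images are assumed to be preprocessed to the generator's input range [-1, 1].
        fake = sess.run(fake_op, feed_dict={real_placeholder: img[np.newaxis]})
        # Map the tanh output from [-1, 1] back to [0, 255] before saving.
        out = ((fake[0] + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
        # Note: cv2.imwrite expects BGR; convert first if your images were loaded as RGB.
        cv2.imwrite(os.path.join(out_dir, "epoch%03d_%d.png" % (epoch, i)), out)
```

Picking the released weights from validation-set results, as described above, is exactly this kind of visual selection rather than a loss-based one.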

okok.