mingyuliutw / UNIT

Unsupervised Image-to-Image Translation


Question about the multi-scale discriminator loss

Johnson-yue opened this issue · comments

Hi, I am confused about the discriminator loss. In your paper:

We use the LSGAN objective proposed by Mao et al. [38]. We
employ multi-scale discriminators proposed by Wang et al. [20] to guide the
generators to produce both realistic details and correct global structure.

So I checked the pix2pixHD paper, and I think your multi-scale discriminator loss is different from the pix2pixHD feature matching loss.

d_loss from your code is here.
I think your code computes:
loss = L2(out0 - 0) + L2(out1 - 1)
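If I read the code correctly, a minimal sketch of that multi-scale LSGAN discriminator loss might look like this (function and variable names are my own, not from the repo):

```python
import torch

def multiscale_lsgan_d_loss(fake_outs, real_outs):
    """LSGAN discriminator loss summed over the outputs of several
    discriminators operating at different image scales.

    Discriminator outputs on fake images are pushed toward 0,
    outputs on real images toward 1."""
    loss = 0.0
    for out_fake, out_real in zip(fake_outs, real_outs):
        loss = loss + torch.mean((out_fake - 0) ** 2) \
                    + torch.mean((out_real - 1) ** 2)
    return loss
```

With perfect discriminator outputs (0 on fakes, 1 on reals) the loss is zero, which matches the L2(out0 - 0) + L2(out1 - 1) form above.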

But the pix2pixHD code is here.

I think that code computes:
loss = 1/2 * L1(out0 - out1)

Are they different?
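For comparison, here is a sketch of the pix2pixHD-style feature matching term as I understand it (names are illustrative; the actual code also weights the sum by the number of discriminators and layers):

```python
import torch

def feature_matching_loss(feats_fake, feats_real):
    """L1 distance between intermediate discriminator features
    computed on a generated image and on a real image,
    summed over layers with a 1/2 weight."""
    loss = 0.0
    for f_fake, f_real in zip(feats_fake, feats_real):
        loss = loss + 0.5 * torch.mean(torch.abs(f_fake - f_real))
    return loss
```

Note that this compares two feature tensors (out0 vs out1) against each other, rather than pushing each discriminator output toward a fixed target of 0 or 1, which is the difference the question is about.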

@Johnson-yue The pix2pixHD code you refer to is the feature matching loss. It is not the multi-scale discriminator loss. That's probably why you think they are different.

@mingyuliutw So, in this code, UNIT uses a multi-scale discriminator loss, while the pix2pixHD code I referred to is the feature matching loss? I thought your implementation of the multi-scale discriminator loss was based on pix2pixHD; sorry about that.

Both UNIT and pix2pixHD use multi-scale discriminators, but the pix2pixHD multi-scale discriminator is a conditional one, and it also has a feature matching loss.
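To illustrate the "conditional" point: a conditional discriminator sees the input image concatenated with the (real or generated) output, while an unconditional one, as in UNIT, sees an image alone. Tensor shapes below are hypothetical, for illustration only:

```python
import torch

# Hypothetical shapes: batch of 1, RGB, 64x64.
input_img = torch.randn(1, 3, 64, 64)   # source-domain image
output_img = torch.randn(1, 3, 64, 64)  # translated (or real target) image

# UNIT-style unconditional discriminator input: just an image.
d_in_unconditional = output_img          # shape (1, 3, 64, 64)

# pix2pixHD-style conditional discriminator input: the source image
# concatenated with the output along the channel dimension.
d_in_conditional = torch.cat([input_img, output_img], dim=1)  # (1, 6, 64, 64)
```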