mchong6 / JoJoGAN

Official PyTorch repo for JoJoGAN: One Shot Face Stylization

Perceptual loss: LPIPS vs. StyleGAN Discriminator

matanby opened this issue

Hi!

Thanks for sharing this awesome work :-)

I'm wondering about the difference in perceptual image quality when using the LPIPS model (as stated in the paper) vs. the StyleGAN discriminator (as used in the updated Colab notebook) for the perceptual loss.
In your experience, what kind of difference does using the StyleGAN discriminator make to image quality compared to using LPIPS?

Hi, the paper should be updated on arXiv with some comparisons. But generally LPIPS captures fewer details and has some downsampling artifacts.
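
For anyone comparing the two options, here's a minimal sketch of both loss formulations. The LPIPS part uses the public `lpips` package; the discriminator feature-matching part is an assumption about how activations might be pulled from a StyleGAN discriminator (the `return_features=True` flag is hypothetical, not the repo's actual API):

```python
import torch
import torch.nn.functional as F
import lpips

# Option 1: LPIPS perceptual loss (as described in the original paper).
lpips_fn = lpips.LPIPS(net='vgg').cuda()

def lpips_loss(fake, target):
    # Both images expected in [-1, 1], shape (N, 3, H, W).
    return lpips_fn(fake, target).mean()

# Option 2: feature matching with the StyleGAN discriminator
# (hypothetical sketch -- assumes `discriminator` returns a list of
# intermediate activations when called with return_features=True).
def discriminator_feature_loss(discriminator, fake, target):
    with torch.no_grad():
        target_feats = discriminator(target, return_features=True)
    fake_feats = discriminator(fake, return_features=True)
    loss = 0.0
    for f_fake, f_target in zip(fake_feats, target_feats):
        loss = loss + F.l1_loss(f_fake, f_target)
    return loss
```

The intuition behind the second option is that the discriminator was trained on the same face distribution at full resolution, so matching its intermediate activations avoids the downsampling that LPIPS applies to its inputs.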

I've just noticed the updated paper, and read through the relevant section. Thanks for the answer!