How to improve image fidelity?
soon-yau opened this issue · comments
Soon-Yau Cheong commented
(Left: generated. Right: from dataset)
I have trained on a mannequin dataset and the results look quite good. However, the generated images are a bit blurry and fine details are lost, so I wonder what changes I need to make to get crisper output.
I currently use a VQGAN pretrained on ImageNet. I have also tried to train a VAE from scratch (using the default train_vae.py), but its output is blurry. I tried increasing the number of layers, the number of tokens, etc., but didn't see improvement, and it made training a bit more unstable. Any advice on which VAE parameters to change?
Romain Beaumont commented
You could try the VQGAN with the 16k codebook, or even the f=8 variant (see the details on their page).
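For intuition on why the f=8 variant tends to look sharper: the VQGAN encoder downsamples the image spatially by a factor f, and the transformer then models a sequence of (image_size / f)² discrete codes, so a smaller f keeps finer detail at the cost of a 4× longer token sequence. A minimal sketch (plain Python; the 256×256 image size and the f=16/f=8 factors match the common taming-transformers ImageNet checkpoints, but treat the numbers as illustrative):

```python
# Number of discrete VQGAN tokens for a square image at downsampling factor f.
# The transformer's sequence length (and thus memory/compute) grows as 1/f**2.
def num_tokens(image_size: int, f: int) -> int:
    side = image_size // f  # latent grid side length
    return side * side


# f=16 (default ImageNet VQGAN): coarse latents, short sequence
print(num_tokens(256, 16))  # 256 tokens

# f=8: finer latents, sharper reconstructions, 4x longer sequence
print(num_tokens(256, 8))   # 1024 tokens
```

The 16k-codebook variant attacks the same problem from a different angle: more code vectors per latent position rather than more positions, which can also recover detail without lengthening the sequence.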
Soon-Yau Cheong commented