akanimax / pro_gan_pytorch

Unofficial PyTorch implementation of the paper titled "Progressive growing of GANs for improved Quality, Stability, and Variation"


pretrained model

fededemo opened this issue

Hi @akanimax! Great work, really. Is it possible to train the model on a large dataset and then use the trained weights as a pretraining step for another one? Thanks for your help.

Hi @fededemo,
Thanks for the kind words 😄.
Doing transfer learning on GANs requires the original model to be trained on a large and diverse dataset. I believe models trained on ImageNet (BigGAN, StyleGAN-XL), or the latest and greatest GigaGAN trained on LAION, would yield good results when fine-tuned on other datasets.

Hi @akanimax!
Thanks for your answer. I agree with you, but my question is more about how I can do that with your code.
It would be really great for me to know.
Thanks again

Ah I see. No, unfortunately we don't have any models on ImageNet, so it's not possible with this code ☹️.

But if I have one, can I do it? We're planning to train one with chest X-rays and afterwards fine-tune it on other small X-ray datasets to do some kind of "data augmentation".
Thanks for your help.

We have that big X-ray dataset, and our idea is to train the network from scratch with it, using your code, for a long time. Then we would use that pretrained model for fine-tuning, again with your code, on another much smaller dataset with some rare diseases. We would like to know: is that possible using your code?

Right, I see. There isn't an option for resuming from a checkpoint in the training_script right now. But all the information needed to do so is in the checkpoints (checkpoint saving code).

Adding resume functionality to the training script is easy; you can check how generator loading is done for the latent_interpolation script.

I can try to add this option to the training_script over the weekend.
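In the meantime, the resume logic described above can be sketched in plain PyTorch. This is a minimal, hedged illustration of the general pattern (save a checkpoint dict, reload the state dicts, continue from the stored epoch); the names `generator`, `gen_optim`, and the checkpoint keys are assumptions for illustration, not the exact identifiers used in pro_gan_pytorch.

```python
# Minimal sketch of checkpoint save/resume in PyTorch.
# All names here (generator, gen_optim, checkpoint keys) are
# illustrative assumptions, not the repo's actual identifiers.
import os
import tempfile

import torch
import torch.nn as nn

path = os.path.join(tempfile.gettempdir(), "progan_checkpoint.pt")

# Stand-in for the generator network and its optimizer.
generator = nn.Linear(4, 4)
gen_optim = torch.optim.Adam(generator.parameters(), lr=1e-3)

# --- saving (what the training script already does in spirit) ---
checkpoint = {
    "generator": generator.state_dict(),
    "gen_optim": gen_optim.state_dict(),
    "epoch": 10,
}
torch.save(checkpoint, path)

# --- resuming (the missing piece being discussed) ---
new_gen = nn.Linear(4, 4)
new_optim = torch.optim.Adam(new_gen.parameters(), lr=1e-3)

state = torch.load(path)
new_gen.load_state_dict(state["generator"])
new_optim.load_state_dict(state["gen_optim"])
start_epoch = state["epoch"] + 1  # continue where training stopped
```

For progressive growing specifically, a real resume would also need to restore the current depth/resolution and the fade-in alpha, which live alongside the weights in the saved state.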

That would be great for us. Thanks again mate for your work and even more, for your help! 🥇

Hi @akanimax!

@mauricio-repetto and I have been working on a potential solution to enable the retraining feature for your work. If you let us, we would like to submit a PR that includes modifications to networks.py, train.py, and updates to the README documentation.

Thank you!

Yeah sure, please feel free to open a PR. I'll go through it.

Hi! I think we may need access in order to push a new branch, is that correct? Thanks!

Hey, so to keep things in line with previous contributions, please use the usual fork -> implement -> PR method.

Hi! We just did it in #69.
Thanks!