kwotsin / mimicry

[CVPR 2020 Workshop] A PyTorch GAN library that reproduces research results for popular GANs.

Some cons when using metric...

Leiwx52 opened this issue · comments

commented

Hi, thank you for your contributions! I found this repo really easy to follow.

However, when I called mmc.metrics.evaluate to compute the FID score, it turned out that the dataset path is hard-coded to './datasets', which is inconvenient if someone keeps their datasets outside the GAN project directory.

Also, would you mind providing some details on the parameter combinations (i.e. batch_size and n_dis during training, and num_samples, num_real_samples, and num_fake_samples during evaluation) with which the models can reach the baseline performance (IS, FID, KID)?

Hi @WingsleyLui , thanks for your kind comments! I also realised this from a similar issue #20, and so added support for using custom datasets. In general, one only needs to implement a Dataset object and feed it into the dataset argument. I also added an example showing how to use it with the evaluate API here: https://github.com/kwotsin/mimicry/blob/master/examples/eval_pretrained.py#L89-L99
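For illustration, below is a minimal sketch of what passing a custom dataset could look like. The exact keyword arguments should be checked against the linked eval_pretrained.py example; the generator class, the image path, the sample counts, and the checkpoint step here are placeholders, not values from this thread.

```python
import torch
import torchvision.datasets as datasets
import torchvision.transforms as transforms

import torch_mimicry as mmc
from torch_mimicry.nets import sngan

# Any torch.utils.data.Dataset works, e.g. an ImageFolder living outside the project.
transform = transforms.Compose([
    transforms.Resize(32),
    transforms.CenterCrop(32),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
custom_dataset = datasets.ImageFolder(root='/path/to/my/images',  # placeholder path
                                      transform=transform)

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
netG = sngan.SNGANGenerator32().to(device)

# Feed the Dataset object directly into the `dataset` argument of the evaluate API.
# log_dir/evaluate_step are assumed to point at an existing trained checkpoint.
mmc.metrics.evaluate(
    metric='fid',
    netG=netG,
    log_dir='./log/example',
    evaluate_step=100000,
    num_real_samples=10000,
    num_fake_samples=10000,
    dataset=custom_dataset,   # custom Dataset instead of a built-in dataset name
    device=device)
```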

Feel free to let me know your thoughts on this addition!

commented

@kwotsin Thank you for your response! This helps a lot!

Besides, are you planning to implement visualization of the spectral norm and weight norm of each layer during training with TensorBoard? The largest singular value might also help detect mode collapse. Although this can be computationally expensive, it could be offered as an optional feature, since many researchers include such plots in their analysis sections.
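This is not part of the library; as a rough sketch of the idea, one could periodically log per-layer weight norms and largest singular values with a standard TensorBoard SummaryWriter, assuming a discriminator netD and a training loop that exposes the global step:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

def log_layer_norms(netD, writer, global_step):
    """Log the Frobenius norm and largest singular value of each weight matrix."""
    for name, param in netD.named_parameters():
        if param.dim() < 2:  # skip biases and 1-D buffers
            continue
        # Flatten conv kernels to a 2-D matrix (an approximation of the conv operator's norm).
        W = param.detach().reshape(param.shape[0], -1)
        writer.add_scalar(f'weight_norm/{name}', W.norm().item(), global_step)
        # Largest singular value (spectral norm); the SVD is the expensive part,
        # so this is typically done only every N steps.
        sigma_max = torch.linalg.svdvals(W)[0].item()
        writer.add_scalar(f'spectral_norm/{name}', sigma_max, global_step)

# Usage inside a training loop:
# writer = SummaryWriter(log_dir='./log/example')
# if global_step % 500 == 0:
#     log_layer_norms(netD, writer, global_step)
```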

@WingsleyLui Thanks for your suggestions, I will take a look at the SN visualisation as well 👍