Bugs in README.md and documentation for evaluate()
rainbowtp opened this issue
The argument to torch_mimicry.metrics.evaluate() should be dataset, not dataset_name.
Error:
fid_score() got an unexpected keyword argument 'dataset_name'
README.md:
import torch
import torch.optim as optim
import torch_mimicry as mmc
from torch_mimicry.nets import sngan
# Data handling objects
... ...
# Start training
... ...
# Evaluate fid
mmc.metrics.evaluate(
metric='fid',
log_dir='./log/example',
netG=netG,
dataset_name='cifar10', # should be dataset='cifar10'
num_real_samples=50000,
num_fake_samples=50000,
evaluate_step=100000,
device=device)
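For clarity, the corrected call (keeping everything else from the snippet above, including the elided netG and device setup) would be:

mmc.metrics.evaluate(
    metric='fid',
    log_dir='./log/example',
    netG=netG,
    dataset='cifar10',
    num_real_samples=50000,
    num_fake_samples=50000,
    evaluate_step=100000,
    device=device)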
Documentation:
The same # Evaluate fid snippet in the documentation has the same wrong keyword.
I think this is the cause: fid_score() names the argument dataset:
def fid_score(num_real_samples,
num_fake_samples,
netG,
dataset, # the keyword passed via torch_mimicry.metrics.evaluate() must match this name
seed=0,
device=None,
batch_size=50,
verbose=True,
stats_file=None,
log_dir='./log'):
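A minimal sketch of why the mismatch surfaces as a TypeError (simplified hypothetical definitions, not the library's actual source): evaluate() presumably forwards its keyword arguments on to the metric function, so any keyword that fid_score() does not declare fails immediately:

def fid_score(num_real_samples, num_fake_samples, netG, dataset,
              seed=0, device=None):
    # Stand-in body; the real function computes FID.
    pass

def evaluate(metric, **kwargs):
    # Forward all remaining keywords to the chosen metric function.
    if metric == 'fid':
        return fid_score(**kwargs)

evaluate(metric='fid',
         num_real_samples=10,
         num_fake_samples=10,
         netG=None,
         dataset_name='cifar10')
# TypeError: fid_score() got an unexpected keyword argument 'dataset_name'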
Agreed. Also, I am not sure what the stats_file argument means, since it is mandatory when a custom dataset is used.
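My guess (an assumption on my part, not confirmed by anything quoted above) is that stats_file caches the Inception statistics of the real data, so FID does not recompute them on every evaluation; for a built-in dataset like cifar10 the library can supply these itself, but for a custom dataset there is nothing to fall back on. Roughly in this spirit:

import numpy as np

# Stand-in for Inception features of the real dataset, shape (N, 2048).
real_features = np.random.randn(1000, 2048)

# FID needs the feature mean and covariance of the real data; caching
# them avoids re-extracting features on every run.
mu = np.mean(real_features, axis=0)
sigma = np.cov(real_features, rowvar=False)
np.savez('custom_dataset_stats.npz', mu=mu, sigma=sigma)

# Later evaluations can reload the cached statistics:
with np.load('custom_dataset_stats.npz') as f:
    mu, sigma = f['mu'], f['sigma']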
Thanks for raising this issue. I updated the library some time back but forgot to update the README, so it was still using an old argument that is no longer valid. I've fixed this in the linked PR.
Please reinstall with pip install git+https://github.com/kwotsin/mimicry.git
to get the latest version, thank you!