fatchord / WaveRNN

WaveRNN Vocoder + TTS

Home Page: https://fatchord.github.io/model_outputs/

The result is not deterministic when inferring the same input.

xinzheshen opened this issue

Hello, when I run inference on the same input with my modified version of the code, the output is sometimes not deterministic.
I find that the sampling may not be deterministic. Is this normal? And can it be avoided?
I have set the random seed at the beginning as follows.

import random

import numpy as np
import torch

def setup_seed(seed):
    # Seed every RNG involved: PyTorch (CPU and all GPUs), NumPy, and Python.
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    np.random.seed(seed)
    random.seed(seed)
    # Ask cuDNN for deterministic kernels and disable autotuning.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

Do you have any suggestions? Thank you.

Rayhane-mamah/Tacotron-2#155 (comment)

ibab/tensorflow-wavenet#347

Yeah, that's totally normal.
Even if you use a softmax output rather than MoL, random sampling from the softmax distribution gives better results than choosing the argmax.
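
For illustration, here is a minimal sketch of the difference between argmax decoding and sampling from the output distribution. This is not the repo's actual generate code; the 512-class size and the variable names are assumptions made for the example.

import torch

torch.manual_seed(1234)

# Stand-in for the per-step output distribution a WaveRNN-style model produces
# (e.g. a softmax over quantised amplitude classes).
logits = torch.randn(512)
probs = torch.softmax(logits, dim=-1)

# Deterministic choice: always pick the most probable class.
argmax_sample = torch.argmax(probs)

# Stochastic choice: draw from the categorical distribution. This is the
# sampling step the comment above refers to; it introduces randomness but
# usually sounds better than argmax.
random_sample = torch.distributions.Categorical(probs=probs).sample()

print(argmax_sample.item(), random_sample.item())

With all seeds fixed, the sampling step itself is repeatable, so sampling alone does not explain run-to-run differences; the sketch only shows why sampling is used at all.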

Thank you @mindmapper15. I get it a little, but I'm still confused.
For example, when I execute the code below to sample 10 values, it gives the same result every time I run it. So if the distribution and the random seed are fixed, the result should be the same, shouldn't it?
import torch

seed = 1234
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)

# Fixed categorical distribution on the GPU; with a fixed seed the samples repeat.
m = torch.distributions.Categorical(torch.tensor([0.25, 0.25, 0.4, 0.1]).cuda())
for i in range(10):
    print(m.sample())

output:
tensor(0, device='cuda:0')
tensor(2, device='cuda:0')
tensor(0, device='cuda:0')
tensor(2, device='cuda:0')
tensor(0, device='cuda:0')
tensor(2, device='cuda:0')
tensor(3, device='cuda:0')
tensor(2, device='cuda:0')
tensor(2, device='cuda:0')
tensor(0, device='cuda:0')

If so, when I run inference on the same input, the result should be deterministic. Why isn't it?

I'm not sure what the cause is...
But I found an interesting comment in another repository.

pytorch/pytorch#7068 (comment)

Maybe you should check it out!
I hope you solve the problem :)
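
For background, one reading of that thread (an assumption on my part, not something verified against the WaveRNN kernels) is that some CUDA ops accumulate with floating-point atomics, so the order of additions can change between runs, and floating-point addition is not associative. A tiny plain-Python sketch of the underlying effect:

# Floating-point addition is not associative: the same values summed in a
# different order can round differently. GPU kernels that accumulate with
# atomicAdd do not guarantee an order, which is one way identical inputs can
# yield slightly different logits from run to run even with every seed fixed.
vals = [1e16, 1.0, 1.0]

left_to_right = (vals[0] + vals[1]) + vals[2]   # each 1.0 is lost to rounding
right_to_left = vals[0] + (vals[1] + vals[2])   # the two 1.0s survive as 2.0

print(left_to_right == right_to_left)  # False

Once the per-step logits differ by even one bit, a sampled class can flip, and because generation is autoregressive the difference then feeds into every later step, which would explain why whole utterances end up audibly different.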

@mindmapper15 Thank you :)

Try setting

torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

as suggested by this article on Reproducibility in PyTorch.

Edit: just noticed that the link leads to the same article as pytorch/pytorch#7086.