1ytic / warp-rnnt

CUDA-Warp RNN-Transducer

Question about average_frames and reduction params

wl-junlin opened this issue · comments

I want a stable loss that is robust to labels_lengths during training.
What values should I pass to these two params?

Also, what is the approximate relationship between the loss and the actual WER?
For example, if I want a WER around 0.5, roughly what value should the loss have?

You shouldn't average over frames. If I remember correctly, it doesn't make sense theoretically: the loss is computed over the entire utterance.
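
For reference, a minimal sketch of a call that keeps the loss per utterance. This assumes the PyTorch binding's `rnnt_loss` with `average_frames` and `reduction` keyword arguments; the tensor shapes and the `reduction="mean"` choice are illustrative, not a prescription:

```python
import torch
from warp_rnnt import rnnt_loss

# Illustrative shapes (assumptions): N = batch, T = frames, U = target length + 1, V = vocab size.
N, T, U, V = 2, 10, 5, 20

# The kernel is CUDA-only, so all tensors live on the GPU.
log_probs = torch.randn(N, T, U, V, device="cuda", requires_grad=True).log_softmax(dim=-1)
labels = torch.randint(1, V, (N, U - 1), dtype=torch.int32, device="cuda")   # no blank (0) in targets
frames_lengths = torch.full((N,), T, dtype=torch.int32, device="cuda")
labels_lengths = torch.full((N,), U - 1, dtype=torch.int32, device="cuda")

# average_frames=False keeps the per-utterance loss (no per-frame normalization);
# reduction="mean" then averages the per-utterance losses over the batch.
loss = rnnt_loss(log_probs, labels, frames_lengths, labels_lengths,
                 average_frames=False, reduction="mean", blank=0)
loss.backward()
```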

There is no direct link between the RNN-T loss value and WER. I think a good analogue is the relationship between negative log-likelihood and the accuracy of a classifier.