kan-bayashi / PytorchWaveNetVocoder

WaveNet vocoder implementation with PyTorch.

Home Page: https://kan-bayashi.github.io/WaveNetVocoderSamples/

How to generate speech from features with WORLD vocoder

ghostcow opened this issue · comments

Hi,

I'm trying to debug my system that uses your WaveNet vocoder. Is there a way to create a WAV file from the features your code generates?

Thanks

What do you mean?
If you run one of the recipes, you will get WAV files generated from the feature vectors.

I meant generating them with the WORLD vocoder instead of the WaveNet one.
My question, essentially, is: why do you change the F0 when creating the features? Is that really necessary?

I have tried both (raw F0, and continuous F0 + U/V information), but I did not compare them rigorously.
The choice of continuous F0 is based on our experience.
Both work, so you can probably use F0 directly instead of continuous F0 + U/V.
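For reference, the continuous-F0 + U/V transformation being discussed can be sketched roughly like this (a minimal NumPy sketch, not the repository's actual code; the function name is made up):

```python
import numpy as np

def to_continuous_f0(f0):
    """Interpolate F0 over unvoiced (zero) frames and return
    (continuous_f0, uv), where uv is a 0/1 voiced flag per frame."""
    f0 = np.asarray(f0, dtype=np.float64)
    uv = (f0 > 0).astype(np.float64)
    if not uv.any():  # all frames unvoiced: nothing to interpolate
        return f0, uv
    voiced = np.nonzero(uv)[0]
    # np.interp clamps to the first/last voiced value at the edges
    cont_f0 = np.interp(np.arange(len(f0)), voiced, f0[voiced])
    return cont_f0, uv
```

The idea is that the network conditions on a smooth F0 contour everywhere, while the separate U/V flag preserves the voiced/unvoiced decision that the raw zeros encoded.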

Also, our feature extraction step is based on sprocket, which uses WORLD internally.
With sprocket.speech.Synthesizer, you can generate speech with the WORLD vocoder.

Thanks! I'll try your suggestions!