kan-bayashi / PytorchWaveNetVocoder

WaveNet vocoder implementation with PyTorch.

Home Page: https://kan-bayashi.github.io/WaveNetVocoderSamples/

Integration with Merlin

Mark-Leisten-ajalaco opened this issue · comments

Has anyone looked at bootstrapping this WaveNet vocoder onto Merlin (https://github.com/CSTR-Edinburgh/merlin/)? Merlin is an open-source TTS system for acoustic and duration modelling (it uses Ossian or Festival as a front-end). By default it uses the WORLD vocoder and therefore extracts WORLD vocoder features, so an integration of this project with Merlin should be possible. I'm just interested to see whether someone has tried this out and can offer some guidance.

It's interesting. I think you can replace the WORLD synthesis step with WaveNet-based waveform generation.
In Merlin's synthesis script (https://github.com/CSTR-Edinburgh/merlin/blob/master/misc/scripts/vocoder/world/synthesis.py), the synthesis part runs from line 120 to the end. The three input files are *.f0, *.sp, and *.bapd, stored as raw double-precision data. *.bapd is the band aperiodicity (i.e., coarse aperiodicity). I'm not sure whether our WaveNet-based synthesis uses coarse aperiodicity or full-band aperiodicity (full-band aperiodicity has fft_size / 2 + 1 dimensions).
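To make the file-format point concrete, here is a minimal sketch of reading Merlin-style WORLD feature files as raw doubles with NumPy, the same way the referenced synthesis.py loads them. The frame count, FFT size, and filenames below are illustrative assumptions, not values taken from the issue; only the storage convention (flat little-endian float64, with the spectral envelope reshaped to fft_size / 2 + 1 columns) is the point.

```python
import numpy as np

# Assumed example dimensions (not specified in the issue).
n_frames = 100
fft_size = 1024
sp_dim = fft_size // 2 + 1  # full-band spectral envelope dimension: 513

# Simulate writing WORLD features the way Merlin stores them:
# headerless binary files of double-precision values.
f0 = np.abs(np.random.randn(n_frames)).astype(np.float64)
sp = np.abs(np.random.randn(n_frames, sp_dim)).astype(np.float64)
f0.tofile("sample.f0")
sp.tofile("sample.sp")

# Read them back as flat float64 arrays and restore the 2-D shape
# of the spectral envelope from the known FFT size.
f0_read = np.fromfile("sample.f0", dtype=np.float64)
sp_read = np.fromfile("sample.sp", dtype=np.float64).reshape(-1, sp_dim)
```

A *.bapd file would be read the same way; its per-frame dimensionality is what distinguishes coarse (band) aperiodicity from full-band aperiodicity, which, like the spectral envelope, has fft_size / 2 + 1 values per frame.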