rishikksh20 / VocGAN

VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network


Modified VocGAN


This repo implements a modified version of [VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network](https://arxiv.org/abs/2007.15256) in PyTorch; for the original VocGAN, check out the `baseline` branch. I slightly modified VocGAN's generator and replaced VocGAN's discriminator with Full-Band MelGAN's discriminator: in my experiments, MelGAN's discriminator trains quickly and is powerful enough to push the generator to produce high-fidelity voice, whereas VocGAN's hierarchically-nested JCU discriminator is quite large and slows training down considerably.
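For intuition, here is a minimal sketch of a MelGAN-style multi-scale waveform discriminator, the kind used here in place of the hierarchically-nested JCU discriminator. The layer widths, kernel sizes, and number of scales below are illustrative assumptions rather than this repo's exact architecture.

```python
import torch.nn as nn

class DiscriminatorBlock(nn.Module):
    """One waveform discriminator, roughly in the MelGAN style.
    Channel and kernel sizes are illustrative, not the repo's exact values."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=1, padding=7),
            nn.LeakyReLU(0.2),
            nn.Conv1d(16, 64, kernel_size=41, stride=4, padding=20, groups=4),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 256, kernel_size=41, stride=4, padding=20, groups=16),
            nn.LeakyReLU(0.2),
            nn.Conv1d(256, 1, kernel_size=3, stride=1, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, x):
        return self.layers(x)

class MultiScaleDiscriminator(nn.Module):
    """Applies the same discriminator to the waveform at several time scales."""
    def __init__(self, num_scales=3):
        super().__init__()
        self.blocks = nn.ModuleList(DiscriminatorBlock() for _ in range(num_scales))
        self.downsample = nn.AvgPool1d(kernel_size=4, stride=2, padding=1)

    def forward(self, audio):  # audio: (batch, 1, samples)
        outputs = []
        for block in self.blocks:
            outputs.append(block(audio))
            audio = self.downsample(audio)  # halve the resolution for the next scale
        return outputs
```

Because each scale is just a stack of strided 1-D convolutions over the raw waveform, a forward/backward pass through it is much cheaper than through a hierarchically-nested JCU discriminator, which is the speed difference described above.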

Tested on Python 3.6

pip install -r requirements.txt

Prepare Dataset

  • Download a dataset for training. Any set of wav files with a 22050 Hz sample rate will work (e.g. LJSpeech was used in the paper).
  • Preprocess: python preprocess.py -c config/default.yaml -d [data's root path] (a rough sketch of what this step computes is shown after this list)
  • Edit the configuration yaml file
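For orientation, here is a minimal sketch of the kind of work the preprocessing step performs: computing and caching one mel spectrogram per wav file. The STFT/mel parameters (1024-point FFT, 256 hop, 80 mels) and the output file naming are assumptions for illustration; the real values come from the configuration yaml and preprocess.py.

```python
import glob
import os

import torch
import torchaudio

# Assumed parameters; the real values come from config/default.yaml.
SAMPLE_RATE = 22050
mel_fn = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=1024, hop_length=256, n_mels=80
)

def preprocess(data_root: str) -> None:
    """Compute and cache a mel spectrogram for every wav under data_root."""
    for wav_path in glob.glob(os.path.join(data_root, "**", "*.wav"), recursive=True):
        audio, sr = torchaudio.load(wav_path)          # (channels, samples)
        assert sr == SAMPLE_RATE, f"expected {SAMPLE_RATE} Hz, got {sr} in {wav_path}"
        mel = mel_fn(audio.mean(dim=0, keepdim=True))  # mono -> (1, n_mels, frames)
        torch.save(mel, wav_path.replace(".wav", ".mel"))  # hypothetical output name
```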

Train & Tensorboard

  • python trainer.py -c [config yaml file] -n [name of the run]

    • cp config/default.yaml config/config.yaml and then edit config.yaml
    • Write the root paths of your train/validation files on the 2nd/3rd lines of config.yaml.
  • tensorboard --logdir logs/

Notes

  1. This repo implements a modified VocGAN for faster training; for the true VocGAN implementation, please check out the `baseline` branch. In my testing, the modified VocGAN generates high-fidelity audio in real time.
  2. The training cost of the baseline VocGAN's discriminator is very high (2.8 sec/it on a P100 with batch size 16) compared to the generator alone (7.2 it/sec on a P100 with batch size 16), so it is not feasible for me to train that model for a long time.
  3. We could probably speed up the baseline VocGAN's discriminator by downsampling the audio in the preprocessing stage instead of at training time (currently I use torchaudio.transforms.Resample as a layer to downsample the audio); this should speed up overall discriminator training (see the sketch after this list).
  4. I trained the baseline model for 300 epochs (with batch size 16) on LJSpeech, and the quality of the generated audio is similar to MelGAN at the same epoch on the same dataset. The authors recommend training for 3000 epochs, which is not feasible at the current training speed (2.80 sec/it).
  5. I am open to any suggestions and modifications to this repo.
  6. For a more complete, end-to-end voice cloning or text-to-speech (TTS) toolbox 🤖, please visit Deepsync Technologies.
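Here is a minimal sketch of the optimization suggested in note 3: resample each wav once during preprocessing with torchaudio.transforms.Resample and cache the result, so the discriminator can load the downsampled copies directly instead of resampling on every training iteration. The target rates and file naming are illustrative assumptions.

```python
import torch
import torchaudio

SAMPLE_RATE = 22050
# Illustrative target rates; the baseline discriminator operates on several scales.
TARGET_RATES = (11025, 5512)

resamplers = {
    rate: torchaudio.transforms.Resample(orig_freq=SAMPLE_RATE, new_freq=rate)
    for rate in TARGET_RATES
}

def cache_downsampled(wav_path: str) -> None:
    """Store downsampled copies next to the original wav so the discriminator
    can read them directly instead of resampling every iteration."""
    audio, sr = torchaudio.load(wav_path)
    assert sr == SAMPLE_RATE, f"expected {SAMPLE_RATE} Hz, got {sr}"
    for rate, resample in resamplers.items():
        torch.save(resample(audio), wav_path.replace(".wav", f".{rate}.pt"))
```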

Inference

  • python inference.py -p [checkpoint path] -i [input mel path]
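For orientation, a minimal sketch of the mel-to-waveform step the inference script performs, assuming the generator module has already been constructed and its checkpoint weights loaded. The tensor shapes, sample rate handling, and helper name are illustrative assumptions; see inference.py for the actual code.

```python
import torch
import torchaudio

def vocode(generator: torch.nn.Module, mel_path: str, out_path: str) -> None:
    """Run a cached mel spectrogram through the generator and write a wav file."""
    mel = torch.load(mel_path)          # assumed shape: (1, n_mels, frames)
    generator.eval()
    with torch.no_grad():
        audio = generator(mel)          # assumed shape: (1, 1, samples), values in [-1, 1]
    torchaudio.save(out_path, audio.squeeze(0).cpu(), 22050)

# Example (hypothetical paths): vocode(generator, "sample.mel", "sample_generated.wav")
```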

Pretrained models

Two pretrained models are provided. Both are trained with the modified VocGAN architecture.

Audio Samples

Using pretrained models, we can reconstruct audio samples. Visit here to listen.

Results

[WIP]

References
