
MelGAN

Unofficial PyTorch implementation of MelGAN vocoder

Key Features

  • MelGAN is lighter, faster, and better at generalizing to unseen speakers than WaveGlow.
  • This repository uses the same mel-spectrogram function as NVIDIA/tacotron2, so output from NVIDIA's tacotron2 can be fed to it directly and converted into raw audio (a sketch of that mel computation follows this list).
  • A model pretrained on LJSpeech-1.1 is available via PyTorch Hub.
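
The function below is a minimal sketch of the tacotron2-style mel-spectrogram this vocoder consumes, not the repository's own code; the parameter values (22050 Hz, 1024-point FFT, hop 256, 80 mel bands, 8000 Hz fmax) are assumptions based on common NVIDIA/tacotron2 defaults, so check config/default.yaml for the values actually used here.

import librosa
import numpy as np

def tacotron2_style_mel(wav_path,
                        sr=22050, n_fft=1024, hop_length=256,
                        win_length=1024, n_mels=80, fmin=0.0, fmax=8000.0):
    # Assumed parameter values; the authoritative ones live in config/default.yaml.
    wav, _ = librosa.load(wav_path, sr=sr)
    # Magnitude STFT followed by an 80-band mel filterbank.
    spec = np.abs(librosa.stft(wav, n_fft=n_fft,
                               hop_length=hop_length, win_length=win_length))
    mel_basis = librosa.filters.mel(sr=sr, n_fft=n_fft,
                                    n_mels=n_mels, fmin=fmin, fmax=fmax)
    mel = mel_basis @ spec
    # tacotron2 applies natural-log dynamic range compression with a small floor.
    return np.log(np.clip(mel, a_min=1e-5, a_max=None))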

Prerequisites

Tested on Python 3.6

pip install -r requirements.txt

Prepare Dataset

  • Download a dataset for training. Any collection of .wav files with a 22050 Hz sample rate will do (e.g. LJSpeech, which was used in the paper); a quick sample-rate check is sketched after this list.
  • Preprocess: python preprocess.py -c config/default.yaml -d [data's root path]
  • Edit the configuration YAML file
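
The snippet below is only an illustrative check (not part of this repository) that every file in a dataset folder has the expected 22050 Hz sample rate before preprocessing; the dataset path is a hypothetical placeholder.

import wave
from pathlib import Path

dataset_root = 'datasets/my_speaker'  # hypothetical path; use your own
for path in Path(dataset_root).rglob('*.wav'):
    with wave.open(str(path), 'rb') as f:
        if f.getframerate() != 22050:
            print(f'{path}: {f.getframerate()} Hz -- resample to 22050 Hz first')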

Train & Tensorboard

  • python trainer.py -c [config yaml file] -n [name of the run]
    • cp config/default.yaml config/config.yaml and then edit config.yaml
    • Write the root paths of the training/validation files on the 2nd/3rd lines.
    • Each path should contain *.wav files paired with corresponding (preprocessed) *.mel files (see the sketch after this list).
    • The data loader scans the files under each path recursively.
  • tensorboard --logdir logs/
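
As a rough illustration of the directory layout described above (this is not the repository's data loader, and the assumption that each *.mel file shares its wav's name is mine; the actual naming is decided by preprocess.py), a recursive scan for wav/mel pairs might look like:

from pathlib import Path

def list_wav_mel_pairs(root):
    pairs = []
    for wav in Path(root).rglob('*.wav'):
        mel = wav.with_suffix('.mel')  # assumed naming convention
        if mel.exists():
            pairs.append((wav, mel))
        else:
            print(f'missing mel for {wav}')
    return pairs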

Pretrained model

Try with Google Colab: TODO

import torch
vocoder = torch.hub.load('seungwonpark/melgan', 'melgan')
vocoder.eval()
mel = torch.randn(1, 80, 234) # use your own mel-spectrogram here

if torch.cuda.is_available():
    vocoder = vocoder.cuda()
    mel = mel.cuda()

with torch.no_grad():
    audio = vocoder.inference(mel)
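
If the call above succeeds, the waveform can be written to disk. The shape and scale of inference()'s output aren't documented here, so the squeeze and normalization below are defensive assumptions rather than the repository's prescribed post-processing.

from scipy.io import wavfile

audio = audio.squeeze().cpu().numpy()               # assume a single 1-D waveform
audio = audio / max(1.0, float(abs(audio).max()))   # guard against values outside [-1, 1]
wavfile.write('generated.wav', 22050, audio.astype('float32'))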

Inference

  • python inference.py -p [checkpoint path] -i [input mel path]

Results

See audio samples at: http://swpark.me/melgan/.

Implementation Authors

License

BSD 3-Clause License.

Useful resources
