MakinoharaShoko / tacotron2-japanese

Tacotron 2 implementation for Japanese


Reference: NVIDIA/tacotron2

How to use

  1. Put raw Japanese transcripts in ./filelists
  2. Put the corresponding WAV files in ./wav
  3. (Optional) Download NVIDIA's pretrained Tacotron 2 model
  4. Open ./train.ipynb to install the requirements and start training
  5. Download NVIDIA's pretrained WaveGlow model
  6. Open ./inference.ipynb to synthesize speech
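A note on step 1: the reference NVIDIA/tacotron2 filelists put one utterance per line as "wav_path|transcript", and this repo presumably follows the same format. A minimal sketch (the path and sentence below are made-up examples):

```python
# Hedged sketch of a filelist entry in NVIDIA/tacotron2's format:
# one "wav_path|transcript" pair per line. Paths and text are examples,
# not files shipped with this repo.
entries = [
    ("./wav/0001.wav", "何かあったらいつでも話して下さい。"),
]

# Build the lines that would be written to a file under ./filelists
filelist_lines = [f"{path}|{text}" for path, text in entries]
for line in filelist_lines:
    print(line)
```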

Cleaners

Set the text cleaner on line 30 of ./hparams.py:

1. 'japanese_cleaners'

Before

何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも
(English: "If anything comes up, please talk to me anytime. Not just about the academy, but anything personal as well.")

After

nanikaacltaraitsudemohanashItekudasai.gakuiNnokotojanaku,shijinikaNsurukotodemonanidemo.

2. 'japanese_tokenization_cleaners'

Before

何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも

After

nani ka acl tara itsu demo hanashi te kudasai. gakuiN no koto ja naku, shiji nikaNsuru koto de mo naNdemo.

3. 'japanese_accent_cleaners'

Before

何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも

After

:na)nika a)cltara i)tsudemo ha(na)shIte ku(dasa)i.:ga(kuiNno ko(to)janaku,:shi)jini ka(Nsu)ru ko(to)demo na)nidemo.

4. 'japanese_phrase_cleaners'

Before

何かあったらいつでも話して下さい。学院のことじゃなく、私事に関することでも何でも

After

nanika acltara itsudemo hanashIte kudasai. gakuiNno kotojanaku, shijini kaNsuru kotodemo nanidemo.
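In NVIDIA's Tacotron 2, cleaner names like the four above are looked up as functions and applied in order when text is converted to a symbol sequence. The following is a minimal sketch of that dispatch under those assumptions; the cleaner body is a placeholder, not the repo's real romanizer:

```python
# Sketch of cleaner-name dispatch, modelled on how NVIDIA/tacotron2's
# text module resolves each cleaner name and applies the functions in
# order. The cleaner body below is a placeholder, not the real
# kanji/kana-to-romaji conversion from this repo.
def japanese_phrase_cleaners(text):
    # Placeholder: the real cleaner romanizes the text with phrase spacing.
    return text.strip()

CLEANERS = {
    'japanese_phrase_cleaners': japanese_phrase_cleaners,
}

def clean_text(text, cleaner_names):
    """Apply each named cleaner to the text, in order."""
    for name in cleaner_names:
        cleaner = CLEANERS.get(name)
        if cleaner is None:
            raise ValueError(f'Unknown cleaner: {name}')
        text = cleaner(text)
    return text

print(clean_text('  何かあったらいつでも話して下さい。 ',
                 ['japanese_phrase_cleaners']))
```

The key point is that the string set in ./hparams.py must name a cleaner that actually exists, or text conversion fails.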

Models

Remember to change this line in ./inference.ipynb so the cleaner matches the one the model was trained with:

sequence = np.array(text_to_sequence(text, ['japanese_cleaners']))[None, :]
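The `[None, :]` indexing in the line above adds a batch axis, since the model expects input of shape (batch, time). A toy NumPy illustration, with made-up symbol IDs standing in for the output of text_to_sequence:

```python
import numpy as np

# Made-up symbol IDs standing in for what text_to_sequence would return.
seq = [12, 5, 33, 7]

# [None, :] inserts a leading batch dimension: (4,) -> (1, 4),
# i.e. a batch containing a single utterance.
sequence = np.array(seq)[None, :]
print(sequence.shape)  # (1, 4)
```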

Sanoba Witch

Ayachi NeNe

  • Model 1 ['japanese_cleaners']
  • Model 2 ['japanese_tokenization_cleaners']
  • Model 3 ['japanese_accent_cleaners']

Inaba Meguru

  • Model 1 ['japanese_tokenization_cleaners']
  • Model 2 ['japanese_tokenization_cleaners']

Senren Banka

Takemoto Yoshino

  • Model 1 ['japanese_tokenization_cleaners']

About

License: BSD 3-Clause "New" or "Revised" License


Languages

  • Jupyter Notebook 83.5%
  • Python 16.4%
  • Dockerfile 0.0%