Text Independent Speaker Verification Using GE2E Loss

TensorFlow implementation of text-independent speaker verification based on Generalized End-to-End Loss for Speaker Verification and Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis.
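At its core, the GE2E softmax loss pulls each utterance embedding toward its own speaker's centroid and pushes it away from every other speaker's centroid. Below is a minimal NumPy sketch of that loss for reference; the function name, the fixed w/b values, and the (N speakers, M utterances, D dims) batch layout are illustrative, and the TensorFlow implementation in this repository may differ in detail.

import numpy as np

def ge2e_softmax_loss(embeddings, w=10.0, b=-5.0):
    # embeddings: (N speakers, M utterances, D dims); w, b stand in for
    # the paper's learned scale and bias (w > 0).
    N, M, D = embeddings.shape
    e = embeddings / np.linalg.norm(embeddings, axis=2, keepdims=True)
    centroids = e.mean(axis=1)                          # (N, D)
    # Leave-one-out centroids: exclude the utterance from its own centroid
    loo = (e.sum(axis=1, keepdims=True) - e) / (M - 1)  # (N, M, D)

    loss = 0.0
    for j in range(N):              # true speaker of the utterance
        for i in range(M):          # utterance index
            c = centroids.copy()
            c[j] = loo[j, i]
            cos = (c @ e[j, i]) / np.linalg.norm(c, axis=1)  # cosine sims
            logits = w * cos + b
            # Softmax cross-entropy with speaker j as the target
            loss += np.log(np.exp(logits).sum()) - logits[j]
    return loss / (N * M)

# Toy example: 4 speakers x 5 utterances, 8-dim embeddings
print(ge2e_softmax_loss(np.random.randn(4, 5, 8)))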

Data

Both papers above used an internal dataset consisting of 36M utterances from 18K speakers. In this repository, that dataset is replaced with the combination of VoxCeleb1, VoxCeleb2, and LibriSpeech, all of which are freely available. According to Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis, training on these three public datasets yields about 10% EER, whereas the internal dataset yields about 5% EER (a short sketch of how EER is computed follows the dataset links below).
Automatic downloading will be added to preprocess.py soon. Until then, download the datasets manually using the links below.

LibriSpeech

VoxCeleb1,2
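
For context, equal error rate (EER) is the verification operating point where the false-accept rate equals the false-reject rate. A minimal sketch of computing it from trial scores, illustrative only and not part of this repository's code:

import numpy as np

def equal_error_rate(scores, labels):
    # scores: similarity scores for trial pairs; labels: 1 = same speaker,
    # 0 = impostor. Assumes both target and impostor trials are present.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, eer = np.inf, 1.0
    for t in np.unique(scores):
        far = np.mean(scores[labels == 0] >= t)   # impostors accepted
        frr = np.mean(scores[labels == 1] < t)    # targets rejected
        if abs(far - frr) < best_gap:             # closest FAR/FRR crossing
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer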

Prerequisites

Use requirements.txt to install the required Python packages.

pip install -r requirements.txt

  • Python
  • Tensorflow-gpu 1.6.0
  • NVIDIA GPU + CUDA 9.0 + CuDNN 7.0

Training

1. Preprocess wav data into spectrograms

  • VoxCeleb1 wav files are organized in a tree structure like below
wav_root - speaker_id - video_clip_id - 00001.wav
                                      - 00002.wav
                                      - ...
                                      
  • VoxCeleb2 m4a files are organized in a tree structure like below
wav_root - speaker_id - video_clip_id - 00001.m4a
                                      - 00002.m4a
                                      - ...
                                      
  • LibriSpeech wav files are organized in a tree structure like below
wav_root - speaker_id - speaker_id-001.wav
                      - speaker_id-002.wav
                      - ...
  • Run preprocess.py (a feature-extraction sketch follows the commands below)
python preprocess.py --in_dir /home/ninas96211/data/libri --pk_dir /home/ninas96211/data/libri_pickle --data_type libri
python preprocess.py --in_dir /home/ninas96211/data/vox1 --pk_dir /home/ninas96211/data/vox1_pickle --data_type vox1
python preprocess.py --in_dir /home/ninas96211/data/vox2 --pk_dir /home/ninas96211/data/vox2_pickle --data_type vox2
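
For reference, a sketch of the kind of feature extraction this step performs, using librosa; the parameter values (16 kHz audio, 25 ms window, 10 ms hop, 40 mel bins) follow the GE2E paper, and preprocess.py's actual settings may differ:

import librosa
import numpy as np

def wav_to_log_mel(path, sr=16000, n_fft=400, hop_length=160, n_mels=40):
    # 16 kHz audio, 25 ms window (400 samples), 10 ms hop (160 samples)
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    return np.log(mel + 1e-6).T   # (frames, n_mels)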

2. Train

  • Run train.py
python train.py --in_dir /home/ninas96211/data/wavs_pickle --ckpt_dir ./ckpt
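
GE2E training expects every batch to hold N speakers with M utterances each, the layout the loss sketch above relies on. A hypothetical sampler, assuming preprocess.py writes one pickle per speaker containing a list of (frames, n_mels) spectrogram arrays; the repository's actual data pipeline may be organized differently:

import os
import pickle
import random
import numpy as np

def sample_ge2e_batch(pk_dir, n_speakers=4, m_utts=5, frames=160):
    # Returns a (n_speakers * m_utts, frames, n_mels) speaker-major batch.
    batch = []
    for spk in random.sample(os.listdir(pk_dir), n_speakers):
        with open(os.path.join(pk_dir, spk), "rb") as f:
            utts = pickle.load(f)   # assumed: list of (T, n_mels) arrays
        for utt in random.sample(utts, m_utts):
            # Crop a random window; assumes every utterance has T >= frames
            start = random.randint(0, utt.shape[0] - frames)
            batch.append(utt[start:start + frames])
    return np.stack(batch)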

3. Infer

  • Using data_gen.sh, create a test directory where the wavs are named [speaker_id]_[video_clip_id]_[wav_number].wav
bash data_gen.sh /home/ninas96211/data/test_wav/id10275/CVUXDNZzcmA/00002.wav ~/data/test_wav_set
  • Run inference.py
python inference.py --in_wav1 /home/ninas96211/data/test_wav_set/id10309_pwfqGqgezH4_00004.wav --in_wav2 /home/ninas96211/data/test_wav_set/id10296_f_k09R8r_cA_00004.wav --ckpt_file ./ckpt/model.ckpt-35000
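
Under the hood, verification comes down to the cosine similarity between the two utterances' d-vector embeddings. A minimal sketch; the 256-dim size and the 0.6 threshold are illustrative, and inference.py computes the score inside its TensorFlow graph:

import numpy as np

def cosine_similarity(emb1, emb2):
    # Higher score = more likely the same speaker
    return float(np.dot(emb1, emb2) /
                 (np.linalg.norm(emb1) * np.linalg.norm(emb2)))

# Stand-in random d-vectors; real ones come from the trained model
e1, e2 = np.random.randn(256), np.random.randn(256)
same_speaker = cosine_similarity(e1, e2) > 0.6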

Current Issues

  • @jaekukang cloned this repository and trained the model successfully. However, he found a bug in inference.py, which has since been fixed.
