This repository provides a PyTorch implementation of SpeechSplit, which enables fine-grained speaking style conversion by disentangling speech into content, timbre, rhythm, and pitch.
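To make the decomposition concrete, here is a minimal, purely illustrative PyTorch sketch of the data flow: three bottlenecked encoders for content, rhythm, and pitch, plus a speaker identity vector for timbre, feeding a single decoder. All module names, dimensions, and the use of plain GRUs are hypothetical placeholders, not the architecture shipped in this repository.

```python
import torch
import torch.nn as nn

class TinySpeechSplitSketch(nn.Module):
    """Illustrative only: three narrow encoders plus one decoder.

    The real model uses carefully sized information bottlenecks and random
    resampling of some encoder inputs; this sketch only shows the data flow,
    with made-up dimensions.
    """
    def __init__(self, n_mels=80, d_content=8, d_rhythm=2, d_pitch=4, n_speakers=82):
        super().__init__()
        self.content_enc = nn.GRU(n_mels, d_content, batch_first=True)
        self.rhythm_enc = nn.GRU(n_mels, d_rhythm, batch_first=True)
        self.pitch_enc = nn.GRU(1, d_pitch, batch_first=True)
        self.decoder = nn.GRU(d_content + d_rhythm + d_pitch + n_speakers,
                              n_mels, batch_first=True)

    def forward(self, mel, f0, spk_onehot):
        c, _ = self.content_enc(mel)    # content code (narrow bottleneck)
        r, _ = self.rhythm_enc(mel)     # rhythm code
        p, _ = self.pitch_enc(f0)       # pitch code from a normalized F0 contour
        s = spk_onehot.unsqueeze(1).expand(-1, mel.size(1), -1)  # timbre (speaker id)
        out, _ = self.decoder(torch.cat([c, r, p, s], dim=-1))
        return out

mel = torch.randn(1, 128, 80)   # (batch, frames, mel bins)
f0 = torch.randn(1, 128, 1)     # (batch, frames, 1) normalized F0
spk = torch.zeros(1, 82); spk[0, 0] = 1.0
recon = TinySpeechSplitSketch()(mel, f0, spk)
print(recon.shape)              # torch.Size([1, 128, 80])
```

In the paper, random resampling of some encoder inputs and carefully tuned bottleneck widths are what force each code to carry only one aspect of speech.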
To ensure respect for privacy rights and responsible use of our code, we are releasing only a portion of the code, for demonstration purposes. If you are interested in training, please send an email to auspicious3000@gmail.com with your name, affiliation, and a description of how the code will be used in your research.
This is a short video that explains the main concepts of our work. If you find this work useful and use it in your research, please consider citing our paper.
```
@article{qian2020unsupervised,
  title={Unsupervised speech decomposition via triple information bottleneck},
  author={Qian, Kaizhi and Zhang, Yang and Chang, Shiyu and Cox, David and Hasegawa-Johnson, Mark},
  journal={arXiv preprint arXiv:2004.11284},
  year={2020}
}
```
The audio demo for SpeechSplit can be found here.
- Python 3.6
- Numpy
- Scipy
- PyTorch >= v1.2.0
- librosa
- pysptk
- soundfile
- matplotlib
- wavenet_vocoder
pip install wavenet_vocoder==0.1.1
For more information, please refer to https://github.com/r9y9/wavenet_vocoder.
- Download the pre-trained models to assets
- Download the same WaveNet vocoder model as in AutoVC to assets
- Run demo.ipynb
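If the demo leaves you with converted mel-spectrograms, the final vocoding step might look roughly like the sketch below. It assumes AutoVC-style vocoder helpers (a local synthesis module exposing build_model and wavegen) and the AutoVC vocoder checkpoint name; treat these names and paths as placeholders if the demo here is wired differently.

```python
import torch
import soundfile as sf
# Hypothetical: assumes AutoVC-style vocoder helpers are available locally.
from synthesis import build_model, wavegen

device = "cuda" if torch.cuda.is_available() else "cpu"
vocoder = build_model().to(device)
# Checkpoint name taken from the AutoVC release; adjust to whatever you placed in assets.
ckpt = torch.load("assets/checkpoint_step001000000_ema.pth", map_location=device)
vocoder.load_state_dict(ckpt["state_dict"])

# `mel` would be one converted mel-spectrogram, shaped (frames, 80).
# waveform = wavegen(vocoder, c=mel)
# sf.write("results/output.wav", waveform, 16000)
```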
Download the training data to assets.
The provided training data is very small and is intended for code verification purposes only.
Please use the scripts below to prepare your own data for training.
- Extract spectrograms and F0: python make_spect_f0.py (a rough sketch of this step is given after this list)
- Generate training metadata: python make_metadata.py
- Run the training script: python main.py
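As a rough illustration of what the first step computes, the sketch below extracts a mel-spectrogram with librosa and an F0 contour with pysptk's RAPT from one WAV file. The file path and all analysis parameters (FFT size, hop length, F0 search range) are illustrative assumptions and may differ from what make_spect_f0.py actually uses.

```python
import numpy as np
import soundfile as sf
import librosa
import pysptk

wav_path = "assets/wavs/p225/p225_001.wav"   # hypothetical example path
x, fs = sf.read(wav_path)                    # assumes a mono wav file
x = x.astype(np.float32)

# Mel-spectrogram, transposed to (frames, mel bins) and log-compressed.
mel = librosa.feature.melspectrogram(y=x, sr=fs, n_fft=1024,
                                     hop_length=256, n_mels=80)
log_mel = np.log(np.clip(mel, 1e-5, None)).T

# F0 via RAPT; pysptk expects roughly int16-scaled float32 samples.
f0 = pysptk.sptk.rapt(x * 32768, fs, 256, min=60, max=400)

print(log_mel.shape, f0.shape)
```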
Please refer to Appendix B.4 of the paper for training guidance.
This project is part of ongoing research. We hope this repo is useful for your research. If you need help or have suggestions for improving the framework, please raise an issue, and we will do our best to get back to you as soon as possible.