English | 中文
An open source implementation of Microsoft's VALL-E X zero-shot TTS model.
We release our trained model to the public for research or application usage.
VALL-E X is an amazing multilingual text-to-speech (TTS) model proposed by Microsoft. While Microsoft initially published their findings in a research paper, they did not release any code or pretrained models. Recognizing the potential and value of this technology, our team took on the challenge of reproducing the results and training our own model. We are glad to share our trained VALL-E X model with the community, allowing everyone to experience the power of next-generation TTS! 🎧
More details about the model are presented in the model card.
2023.08.30
- Replaced EnCodec decoder with Vocos decoder, improved audio quality. (Thanks to @v0xie)
2023.08.23
- Added long text generation.
2023.08.20
- Added Chinese README.
2023.08.14
- Pretrained VALL-E X checkpoint is now released. Download it here
git clone https://github.com/Plachtaa/VALL-E-X.git
cd VALL-E-X
pip install -r requirements.txt
Note: If you want to make a prompt, you need to install ffmpeg and add its folder to the environment variable PATH.
When you run the program for the first time, it will automatically download the corresponding model.
If the download fails and reports an error, please follow the steps below to manually download the model.
(Please pay attention to the capitalization of folders)
- Check whether there is a `checkpoints` folder in the installation directory. If not, manually create a `checkpoints` folder (`./checkpoints/`) in the installation directory.
- Check whether there is a `vallex-checkpoint.pt` file in the `checkpoints` folder. If not, please manually download the `vallex-checkpoint.pt` file from here and put it in the `checkpoints` folder.
- Check whether there is a `whisper` folder in the installation directory. If not, manually create a `whisper` folder (`./whisper/`) in the installation directory.
- Check whether there is a `medium.pt` file in the `whisper` folder. If not, please manually download the `medium.pt` file from here and put it in the `whisper` folder.
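The manual-download steps above can be sketched as a short shell session. The download URLs are deliberately omitted (use the links above); only the folder layout is shown:

```shell
# Create the folders VALL-E X expects (note the lowercase names).
mkdir -p ./checkpoints ./whisper

# Then place the two files manually, using the links in this README:
#   ./checkpoints/vallex-checkpoint.pt
#   ./whisper/medium.pt
# e.g. wget -O ./checkpoints/vallex-checkpoint.pt <checkpoint-url>
```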
Not ready to set up the environment on your local machine just yet? No problem! We've got you covered with our online demos. You can try out VALL-E X directly on Hugging Face or Google Colab, experiencing the model's capabilities hassle-free!
VALL-E X comes packed with cutting-edge functionalities:
- Multilingual TTS: Speak in three languages - English, Chinese, and Japanese - with natural and expressive speech synthesis.
- Zero-shot Voice Cloning: Enroll a short 3~10 second recording of an unseen speaker, and watch VALL-E X create personalized, high-quality speech that sounds just like them!
- Speech Emotion Control: Experience the power of emotions! VALL-E X can synthesize speech with the same emotion as the acoustic prompt provided, adding an extra layer of expressiveness to your audio.
- Zero-shot Cross-Lingual Speech Synthesis: Take monolingual speakers on a linguistic journey! VALL-E X can produce personalized speech in another language without compromising fluency or accent. Below is a sample of a Japanese speaker talking in Chinese & English. 🇯🇵 🗣
- Accent Control: Get creative with accents! VALL-E X allows you to experiment with different accents, like speaking Chinese with an English accent or vice versa. 🇨🇳 🇬🇧
- Acoustic Environment Maintenance: No need for perfectly clean audio prompts! VALL-E X adapts to the acoustic environment of the input, making speech generation feel natural and immersive.
Explore our demo page for a lot more examples!
from utils.generation import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav
from IPython.display import Audio
# download and load all models
preload_models()
# generate audio from text
text_prompt = """
Hello, my name is Nose. And uh, and I like hamburger. Hahaha... But I also have other interests such as playing tactic toast.
"""
audio_array = generate_audio(text_prompt)
# save audio to disk
write_wav("vallex_generation.wav", SAMPLE_RATE, audio_array)
# play text in notebook
Audio(audio_array, rate=SAMPLE_RATE)
hamburger.webm
This VALL-E X implementation also supports Chinese and Japanese. All three languages have equally awesome performance!
text_prompt = """
チュソクは私のお気に入りの祭りです。 私は数日間休んで、友人や家族との時間を過ごすことができます。
"""
audio_array = generate_audio(text_prompt)
vallex_japanese.webm
Note: VALL-E X controls accent perfectly even when synthesizing code-switched text. However, you need to manually denote the language of the respective sentences (since our g2p tool is rule-based).
text_prompt = """
[EN]The Thirty Years' War was a devastating conflict that had a profound impact on Europe.[EN]
[ZH]这是历史的开始。 如果您想听更多，请继续。[ZH]
"""
audio_array = generate_audio(text_prompt, language='mix')
vallex_codeswitch.webm
VALL-E X provides tens of speaker voices which you can directly use for inference! Browse all voices in the code.
VALL-E X tries to match the tone, pitch, emotion and prosody of a given preset. The model also attempts to preserve music, ambient noise, etc.
text_prompt = """
I am an innocent boy with a smoky voice. It is a great honor for me to speak at the United Nations today.
"""
audio_array = generate_audio(text_prompt, prompt="dingzhen")
smoky.webm
VALL-E X supports voice cloning! You can make a voice prompt with any person, character or even your own voice, and use it like other voice presets.
To make a voice prompt, you need to provide a speech of 3~10 seconds long, as well as the transcript of the speech.
You can also leave the transcript blank to let the Whisper model to generate the transcript.
VALL-E X tries to match the tone, pitch, emotion and prosody of a given prompt. The model also attempts to preserve music, ambient noise, etc.
from utils.prompt_making import make_prompt
### Use given transcript
make_prompt(name="paimon", audio_prompt_path="paimon_prompt.wav",
transcript="Just, what was that? Paimon thought we were gonna get eaten.")
### Alternatively, use whisper
make_prompt(name="paimon", audio_prompt_path="paimon_prompt.wav")
Now let's try out the prompt we've just made!
from utils.generation import SAMPLE_RATE, generate_audio, preload_models
from scipy.io.wavfile import write as write_wav
# download and load all models
preload_models()
text_prompt = """
Hey, Traveler, Listen to this, This machine has taken my voice, and now it can talk just like me!
"""
audio_array = generate_audio(text_prompt, prompt="paimon")
write_wav("paimon_cloned.wav", SAMPLE_RATE, audio_array)
paimon_prompt.webm
paimon_cloned.webm
Not comfortable with code? No problem! We've also created a user-friendly graphical interface for VALL-E X. It allows you to interact with the model effortlessly, making voice cloning and multilingual speech synthesis a breeze.
You can launch the UI by the following command:
python -X utf8 launch-ui.py
VALL-E X works well on both CPU and GPU (pytorch 2.0+, CUDA 11.7 and CUDA 12.0).
A GPU VRAM of 6GB is enough for running VALL-E X without offloading.
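As a rough pre-flight check for the 6GB guideline above, you can query VRAM via `nvidia-smi` before deciding between GPU and CPU inference. This helper is an illustrative sketch, not part of the VALL-E X codebase; it falls back to CPU (returns 0.0) when no NVIDIA driver is present:

```python
# Sketch: check the ~6 GB VRAM guideline via nvidia-smi, if available.
import shutil
import subprocess


def gpu_vram_gb() -> float:
    """Return total VRAM of GPU 0 in GB, or 0.0 if nvidia-smi is unavailable."""
    if shutil.which("nvidia-smi") is None:
        return 0.0
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    try:
        return float(out.stdout.splitlines()[0]) / 1024  # MiB -> GB
    except (IndexError, ValueError):
        return 0.0


use_gpu = gpu_vram_gb() >= 6.0  # otherwise run VALL-E X on CPU
```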
VALL-E X is similar to Bark, VALL-E and AudioLM, which generate audio in GPT style by predicting audio tokens quantized by EnCodec.
Compared to Bark:
- ✔ Light-weighted: 3️⃣ ✕ smaller,
- ✔ Efficient: 4️⃣ ✕ faster,
- ✔ Better quality on Chinese & Japanese
- ✔ Cross-lingual speech without foreign accent
- ✔ Easy voice-cloning
- ❌ Fewer languages
- ❌ No special tokens for music / sound effects
| Language | Status |
|---|---|
| English (en) | ✅ |
| Japanese (ja) | ✅ |
| Chinese, simplified (zh) | ✅ |
- We use `wget` to download the model to the directory `./checkpoints/` when you run the program for the first time.
- If the download fails on the first run, please manually download from this link, and put the file under the directory `./checkpoints/`.
- 6GB GPU VRAM - Almost all NVIDIA GPUs satisfy the requirement.
- Transformer computation complexity increases quadratically with sequence length. Hence, all training samples are kept under 22 seconds. Please make sure the total length of the audio prompt and the generated audio is less than 22 seconds to ensure acceptable performance.
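The 22-second budget above can be checked before calling `generate_audio`. The limit comes from this README; the helper below and its 24 kHz default sample rate are illustrative assumptions (in practice, read the rate from `utils.generation.SAMPLE_RATE`):

```python
# Sketch: keep audio prompt + generated speech under the ~22 s training limit.
MAX_TOTAL_SECONDS = 22.0  # from this README's performance note


def remaining_budget(prompt_samples: int, sample_rate: int = 24000) -> float:
    """Seconds available for generated speech after the audio prompt."""
    prompt_seconds = prompt_samples / sample_rate
    return max(0.0, MAX_TOTAL_SECONDS - prompt_seconds)


# A 5-second prompt leaves about 17 seconds for generation.
budget = remaining_budget(24000 * 5)
```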
- Add Chinese README
- Long text generation
- Replace Encodec decoder with Vocos decoder
- Fine-tuning for better voice adaptation
- `.bat` scripts for non-python users
- To be added...
- VALL-E X paper for the brilliant idea
- lifeiteng's vall-e for related training code
- bark for the amazing pioneering work in neural codec TTS models
If you find VALL-E X interesting and useful, give us a star on GitHub! ⭐️ It encourages us to keep improving the model and adding exciting features.
VALL-E X is licensed under the MIT License.
Have questions or need assistance? Feel free to open an issue or join our Discord.
Happy voice cloning! 🤗