Paper | Project Page | Dataset | 🤗 Models | Live Demo
🧢 CapSpeech comprises over 10 million machine-annotated audio-caption pairs and nearly 0.36 million human-annotated audio-caption pairs. CapSpeech provides a new benchmark covering the following tasks:
- CapTTS: style-captioned TTS
- CapTTS-SE: text-to-speech synthesis with sound effects
- AccCapTTS: accent-captioned TTS
- EmoCapTTS: emotion-captioned TTS
- AgentTTS: text-to-speech synthesis for chat agents
Demo video: capspeech.mp4
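To get a feel for the data, here is a minimal sketch (not part of this repo) that streams a few examples with the Hugging Face `datasets` library. The dataset ID, split name, and field layout are assumptions; check the Dataset link above for the actual structure.

```python
# Minimal sketch: stream a few CapSpeech examples from the Hugging Face Hub.
# The dataset ID, split name, and field names below are assumptions, not the
# repo's documented interface -- consult the Dataset link above for details.
from datasets import load_dataset

ds = load_dataset("OpenSound/CapSpeech", split="train", streaming=True)  # assumed ID/split
for example in ds.take(3):
    # Style-captioned TTS pairs typically carry a transcript plus a natural-language
    # caption describing speaking style, emotion, accent, or sound effects.
    print(example)
```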
Explore CapSpeech directly in your browser; no installation needed.
- Live Demo: 🤗 Spaces
Install and run CapSpeech locally.
- Installation & Usage: Instructions
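Because the implementation builds on Parler-TTS (acknowledged at the end of this README), caption-conditioned inference with a released checkpoint is likely to resemble the standard Parler-TTS generation API. The sketch below is illustrative only: the checkpoint ID, caption, and transcript are placeholders, and the actual entry points may differ from the linked instructions.

```python
# Illustrative sketch of caption-conditioned synthesis via the Parler-TTS API
# that this repo builds on. The checkpoint ID is a placeholder -- see the
# Models link above for the released CapSpeech checkpoints.
import torch
import soundfile as sf
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
repo_id = "OpenSound/CapSpeech-model"  # placeholder checkpoint ID

model = ParlerTTSForConditionalGeneration.from_pretrained(repo_id).to(device)
tokenizer = AutoTokenizer.from_pretrained(repo_id)

caption = "A cheerful young female speaker with a fast speaking pace and a British accent."
transcript = "Welcome to CapSpeech, a benchmark for style-captioned text-to-speech."

caption_ids = tokenizer(caption, return_tensors="pt").input_ids.to(device)
transcript_ids = tokenizer(transcript, return_tensors="pt").input_ids.to(device)

audio = model.generate(input_ids=caption_ids, prompt_input_ids=transcript_ids)
sf.write("capspeech_sample.wav", audio.cpu().numpy().squeeze(), model.config.sampling_rate)
```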
Please refer to the following documents to prepare the data, train the model, and evaluate its performance.
- Helin Wang at Johns Hopkins University
- Jiarui Hai at Johns Hopkins University
If you find this work useful, please consider contributing to this repo and citing this work:
@misc{wang2025capspeechenablingdownstreamapplications,
title={CapSpeech: Enabling Downstream Applications in Style-Captioned Text-to-Speech},
      author={Helin Wang and Jiarui Hai and Dading Chong and Karan Thakkar and Tiantian Feng and Dongchao Yang and Junhyeok Lee and Laureano Moro-Velazquez and Jesus Villalba and Zengyi Qin and Shrikanth Narayanan and Mounya Elhilali and Najim Dehak},
year={2025},
eprint={2506.02863},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2506.02863},
}
All datasets, listening samples, source code, pretrained checkpoints, and the evaluation toolkit are licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
See the LICENSE file for details.
This implementation is based on Parler-TTS, F5-TTS, SSR-Speech, Data-Speech, EzAudio, and Vox-Profile. We appreciate their awesome work.
If you find this repo helpful or interesting, consider dropping a ⭐; it really helps and means a lot!