ishan0102 / piper

A fast, local neural text to speech system that runs on an M1 mac

Home Page: https://rhasspy.github.io/piper-samples/


Piper logo

A fast, local neural text to speech system that sounds great and is optimized for the Raspberry Pi 4. Piper is used in a variety of projects.

echo 'Welcome to the world of speech synthesis!' | \
  ./piper --model en-us-blizzard_lessac-medium.onnx --output_file welcome.wav

Listen to voice samples and check out a video tutorial by Thorsten Müller

Sponsored by Nabu Casa

Voices are trained with VITS and exported to ONNX for inference with onnxruntime.

Voices

Our goal is to support Home Assistant and the Year of Voice.

Download voices for the supported languages:

  • Catalan (ca)
  • Danish (da)
  • German (de)
  • British English (en-gb)
  • U.S. English (en-us)
  • Spanish (es)
  • Finnish (fi)
  • French (fr)
  • Greek (el-gr)
  • Icelandic (is)
  • Italian (it)
  • Kazakh (kk)
  • Nepali (ne)
  • Dutch (nl)
  • Norwegian (no)
  • Polish (pl)
  • Brazilian Portuguese (pt-br)
  • Russian (ru)
  • Swedish (sv-se)
  • Ukrainian (uk)
  • Vietnamese (vi)
  • Chinese (zh-cn)
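As a sketch of the download step (the URL and archive name below are placeholders; use the actual link for the language and voice you chose):

# Placeholder URL: substitute the real archive for your voice.
wget https://example.com/voices/voice-en-us-lessac-medium.tar.gz
tar -xzf voice-en-us-lessac-medium.tar.gz
# You should end up with a pair of files such as:
#   en-us-lessac-medium.onnx
#   en-us-lessac-medium.onnx.json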

Installation

Download a release:

  • amd64 (64-bit desktop Linux)
  • arm64 (64-bit Raspberry Pi 4)
  • armv7 (32-bit Raspberry Pi 3/4)
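A minimal install sketch for a 64-bit desktop (the archive name, URL, and layout are assumptions; use the link from the releases page for your platform):

# Download and unpack a release build (names are illustrative).
wget https://github.com/rhasspy/piper/releases/latest/download/piper_amd64.tar.gz
tar -xzf piper_amd64.tar.gz
cd piper          # adjust if the archive extracts elsewhere
./piper --help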

If you want to build from source, see the Makefile and C++ source. You must download and extract piper-phonemize to lib/Linux-$(uname -m)/piper_phonemize before building. For example, lib/Linux-x86_64/piper_phonemize/lib/libpiper_phonemize.so should exist for AMD/Intel machines (as well as everything else from libpiper_phonemize-amd64.tar.gz).
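A build-from-source sketch under the assumptions above (the piper-phonemize URL is a placeholder; download the archive from its releases page):

# Place piper-phonemize where the Makefile expects it (x86_64 shown).
mkdir -p "lib/Linux-$(uname -m)/piper_phonemize"
wget -O libpiper_phonemize-amd64.tar.gz 'https://example.com/libpiper_phonemize-amd64.tar.gz'
# Adjust --strip-components to the archive layout; afterwards
# lib/Linux-x86_64/piper_phonemize/lib/libpiper_phonemize.so must exist.
tar -xzf libpiper_phonemize-amd64.tar.gz -C "lib/Linux-$(uname -m)/piper_phonemize" --strip-components=1
make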

Usage

  1. Download a voice and extract the .onnx and .onnx.json files
  2. Run the piper binary with text on standard input, --model /path/to/your-voice.onnx, and --output_file output.wav

For example:

echo 'Welcome to the world of speech synthesis!' | \
  ./piper --model en-us-lessac-medium.onnx --output_file welcome.wav

For multi-speaker models, use --speaker <number> to change speakers (default: 0).
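For example, selecting the second speaker of a hypothetical multi-speaker voice file:

# multi-speaker-voice.onnx is a placeholder name; speakers are numbered from 0.
echo 'Hello from the second speaker!' | \
  ./piper --model multi-speaker-voice.onnx --speaker 1 --output_file hello.wav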

See piper --help for more options.

People using Piper

Piper has been used in a variety of projects and papers.

Training

See the training guide and the source code.

Pretrained checkpoints are available on Hugging Face

Running in Python

See src/python_run

Run scripts/setup.sh to create a virtual environment and install the requirements. Then run:

echo 'Welcome to the world of speech synthesis!' | scripts/piper \
  --model /path/to/voice.onnx \
  --output_file welcome.wav

If you'd like to use a GPU, install the onnxruntime-gpu package:

.venv/bin/pip3 install onnxruntime-gpu

and then run scripts/piper with the --cuda argument. You will need to have a functioning CUDA environment, such as what's available in NVIDIA's PyTorch containers.
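Putting it together, a GPU run might look like this (same placeholder voice path as above, with a working CUDA setup assumed):

.venv/bin/pip3 install onnxruntime-gpu
echo 'Welcome to the world of speech synthesis!' | scripts/piper \
  --model /path/to/voice.onnx \
  --output_file welcome.wav \
  --cuda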

License

MIT License

