
XTTS streaming server

1) Run the server

Recommended: use a pre-built container

CUDA 12.1:

$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80  ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121

CUDA 11.8 (for older cards):

$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest

Run with a fine-tuned model:

Make sure the model folder /path/to/model/folder contains the following files:

  • config.json
  • model.pth
  • vocab.json

$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest
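
Before starting the container, it can be worth confirming the model folder actually contains all three files. A minimal Python sketch (the folder path below is the same placeholder as above; substitute your own):

```python
from pathlib import Path

# The three files the server expects in the mounted model folder.
REQUIRED_FILES = ("config.json", "model.pth", "vocab.json")

def missing_model_files(model_dir):
    """Return the names of required XTTS model files absent from model_dir."""
    folder = Path(model_dir)
    return [name for name in REQUIRED_FILES if not (folder / name).is_file()]

if __name__ == "__main__":
    # "/path/to/model/folder" is a placeholder; point this at your model folder.
    missing = missing_model_files("/path/to/model/folder")
    if missing:
        print("Missing files:", ", ".join(missing))
    else:
        print("Model folder looks complete.")
```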

Not Recommended: Build the container yourself

To build the Docker container with PyTorch 2.1 and CUDA 11.8:

DOCKERFILE may be Dockerfile, Dockerfile.cpu, Dockerfile.cuda121, or your own custom Dockerfile.

$ cd server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream

Setting the COQUI_TOS_AGREED environment variable to 1 indicates that you have read and agreed to the terms of the CPML license. (Fine-tuned XTTS models are also covered by the CPML license.)

2) Testing the running server

Once your Docker container is running, you can verify that it is working properly. Run the following from a fresh terminal.
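A quick way to confirm the server is listening before running the demo or test script is a plain TCP check against the published port (8000 here, per the docker run commands above; adjust if you mapped a different port):

```python
import socket

def server_listening(host="localhost", port=8000, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if server_listening():
        print("XTTS server is reachable on port 8000.")
    else:
        print("Nothing is listening on port 8000; is the container running?")
```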

Clone xtts-streaming-server

$ git clone git@github.com:coqui-ai/xtts-streaming-server.git

Using the gradio demo

$ cd xtts-streaming-server
$ python -m pip install -r test/requirements.txt
$ python demo.py

Using the test script

$ cd xtts-streaming-server
$ cd test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
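
For reference, a minimal streaming client can be sketched with only the Python standard library. The endpoint name (/tts_stream) and JSON field names below are assumptions, not a documented API; consult test/test_streaming.py for the exact request shape your server version expects:

```python
import json
import urllib.request

SERVER = "http://localhost:8000"  # assumed port mapping from the docker run above

def build_payload(text, language="en", speaker=None):
    """Assemble a JSON body for a streaming TTS request.

    Field names here are assumptions; check test/test_streaming.py for the
    request shape used by your server version.
    """
    payload = {"text": text, "language": language}
    if speaker:
        payload.update(speaker)  # e.g. speaker embedding / conditioning latents
    return payload

def stream_tts(text, out_path="out.wav", chunk_size=4096):
    """POST to the (assumed) /tts_stream endpoint, saving chunks as they arrive."""
    req = urllib.request.Request(
        f"{SERVER}/tts_stream",
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as f:
        while chunk := resp.read(chunk_size):
            f.write(chunk)

if __name__ == "__main__":
    stream_tts("Hello from the streaming server.")
```

Reading the response in fixed-size chunks (rather than buffering the whole body) is what lets playback or saving begin before synthesis has finished.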

About

License: Mozilla Public License 2.0

