CharlesCreativeContent / BentoText2Video

Text-to-Video generation pipeline using BentoML, VLLM (text), XTTS (audio), and SDXL-Turbo (image)

Text-2-Video Generation: Slides Here

This project demonstrates how to build a text-to-video application using BentoML, powered by XTTS, VLLM, and SDXL-Turbo.

Slideshow Picture

I started from the BentoML example projects BentoVLLM, BentoXTTS, and BentoSDXLTurbo, and used MoviePy to stitch their outputs into a single video.
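
As a rough sketch of that stitching step (illustrative only, not the code in this repo; the file names and the build_segment helper are placeholders), each SDXL-Turbo image can be paired with its XTTS narration and the segments concatenated with MoviePy's 1.x API:

from moviepy.editor import AudioFileClip, ImageClip, concatenate_videoclips

def build_segment(image_path, audio_path):
    # Show one still image for exactly as long as its narration lasts.
    narration = AudioFileClip(audio_path)
    return ImageClip(image_path).set_duration(narration.duration).set_audio(narration)

# Placeholder file names for the generated assets.
segments = [
    build_segment("scene_1.png", "narration_1.wav"),
    build_segment("scene_2.png", "narration_2.wav"),
]
concatenate_videoclips(segments, method="compose").write_videofile("output.mp4", fps=24)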

Prerequisites

  • You have installed Python 3.9+ and pip. See the Python downloads page to learn more.
  • You have a basic understanding of key concepts in BentoML, such as Services. We recommend you read Quickstart first.
  • (Optional) We recommend you create a virtual environment for dependency isolation for this project. See the Conda documentation or the Python documentation for details.

Install dependencies

git clone https://github.com/CharlesCreativeContent/BentoText2Video.git
cd BentoText2Video
pip install -r requirements.txt

Run the BentoML Service

We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service, and set the environment variable COQUI_TOS_AGREED=1 to agree to the Coqui TTS terms of service. We currently set lock_packages to false in bentofile.yaml, so Python package versions are not locked when the Bento is built.
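
For orientation, here is a minimal sketch of what such a Service can look like. Only the service name XTTS, the synthesize endpoint, and its text/lang parameters come from this project (see the server log and the curl example below); the decorator options and the stub body are assumptions, not the actual service.py.

from pathlib import Path

import bentoml

@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 600})  # assumed options
class XTTS:
    @bentoml.api
    def synthesize(self, text: str, lang: str = "en") -> Path:
        # Stub: the real implementation generates a script (VLLM), narration (XTTS),
        # and images (SDXL-Turbo), stitches them with MoviePy, and returns the video.
        return Path("output.mp4")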

$ COQUI_TOS_AGREED=1 bentoml serve .

2024-01-18T11:13:54+0800 [INFO] [cli] Starting production HTTP BentoServer from "service:XTTS" listening on http://localhost:3000 (Press CTRL+C to quit)
/workspace/codes/examples/xtts/venv/lib/python3.10/site-packages/TTS/api.py:70: UserWarning: `gpu` will be deprecated. Please use `tts.to(device)` instead.
  warnings.warn("`gpu` will be deprecated. Please use `tts.to(device)` instead.")
 > tts_models/multilingual/multi-dataset/xtts_v2 is already downloaded.
 > Using model: xtts

The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or with other HTTP clients.

CURL

curl -X 'POST' \
  'http://localhost:3000/synthesize' \
  -H 'accept: */*' \
  -H 'Content-Type: application/json' \
  -d '{
  "text": "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
  "lang": "en"
}' -o output.mp4
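
If you prefer Python over curl, a recent BentoML client (1.2+) can call the same endpoint. This is a sketch assuming bentoml.SyncHTTPClient is available in your installed version; the arguments mirror the curl payload above.

import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
    result = client.synthesize(
        text="It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
        lang="en",
    )
    print(result)  # path-like result pointing at the generated video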

Deploy to production

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. A YAML configuration file (bentofile.yaml) is used to define the build options and package your application into a Bento. See Bento build options to learn more.

Make sure you have logged in to BentoCloud, then run the following command in your project directory to deploy the application to BentoCloud.

bentoml deploy .

Once the application is up and running on BentoCloud, you can access it via the exposed URL.

Note: Alternatively, you can use BentoML to generate a Docker image for a custom deployment.
