Code repository for T2V-Turbo


T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback

๐Ÿญ Installation

pip install accelerate transformers diffusers webdataset loralib peft pytorch_lightning open_clip_torch hpsv2 wandb av einops packaging

pip install flash-attn --no-build-isolation
git clone https://github.com/Dao-AILab/flash-attention.git
cd flash-attention
pip install csrc/fused_dense_lib
pip install csrc/layer_norm

pip install git+https://github.com/iejMac/video2dataset.git

conda install xformers
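
After installing, a quick smoke test helps confirm that the compiled dependencies (flash-attn, xformers) actually import. This is a minimal sketch, not part of the repo; the package list simply mirrors the install commands above.

```python
# Sanity-check the environment set up in this section.
import importlib

import torch

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

for name in ("flash_attn", "xformers", "diffusers", "transformers", "pytorch_lightning"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: {getattr(module, '__version__', 'ok')}")
    except ImportError as err:
        print(f"{name}: not importable ({err})")
```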

🚀 Inference

We provide a local demo built with Gradio. (macOS users should set device="mps" in app.py; Intel GPU users should set device="xpu" in app.py.)
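
If you are unsure which backend applies to your machine, the choice described in the note above can be probed as sketched below. This is only an illustration; the actual app.py assigns a plain device string, so edit it there directly.

```python
# Pick the torch device string to set in app.py.
import torch

if torch.cuda.is_available():
    device = "cuda"   # NVIDIA GPUs (default)
elif torch.backends.mps.is_available():
    device = "mps"    # Apple Silicon / macOS
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"    # Intel GPUs
else:
    device = "cpu"
print(device)
```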

  1. Download the unet_lora.pt of our T2V-Turbo (VC2) here.

  2. Download the model checkpoint of VideoCrafter2 here.

  3. Launch the gradio demo with the following command:

pip install gradio==3.48.0
python app.py --unet_dir PATH_TO_UNET_LORA.pt --base_model_dir PATH_TO_VideoCrafter2_MODEL_CKPT
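
Before launching, it can be worth verifying the two downloaded files. The paths below reuse the placeholders from the command above, and the assumption that unet_lora.pt is a plain PyTorch state dict is ours, not stated by the repo.

```python
# Pre-flight check for the demo's two required checkpoints.
from pathlib import Path

import torch

unet_lora_path = Path("PATH_TO_UNET_LORA.pt")                # T2V-Turbo (VC2) LoRA weights
base_model_path = Path("PATH_TO_VideoCrafter2_MODEL_CKPT")   # VideoCrafter2 checkpoint

for p in (unet_lora_path, base_model_path):
    assert p.exists(), f"missing checkpoint: {p}"

# If the LoRA file is a state dict, listing its keys confirms it downloaded intact.
lora_state = torch.load(unet_lora_path, map_location="cpu")
print(f"{len(lora_state)} entries; first key: {next(iter(lora_state))}")
```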

๐Ÿ‹๏ธ Training

To train T2V-Turbo (VC2), run the following command:

bash train_t2v_turbo_vc2.sh

To train T2V-Turbo (MS), run the following command:

bash train_t2v_turbo_ms.sh
