freitzzz / StableVideo

[ICCV 2023] StableVideo: Text-driven Consistency-aware Diffusion Video Editing

Home Page: https://rese1f.github.io/StableVideo/

StableVideo

StableVideo: Text-driven Consistency-aware Diffusion Video Editing
Wenhao Chai, Xun Guo, Gaoang Wang, Yan Lu
ICCV 2023

boat.mp4
car.mp4
blackswan.mp4

Installation

git clone https://github.com/rese1f/StableVideo.git
cd StableVideo
conda create -n stablevideo python=3.11
conda activate stablevideo
pip install -r requirements.txt
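
Before downloading any checkpoints, you may want to confirm that PyTorch was installed with GPU support. The short sanity check below is not part of the repository; it only prints what the environment reports.

# optional sanity check (not part of the repo): confirm PyTorch and CUDA are usable
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))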

Download Pretrained Model

All models and detectors can be downloaded from the ControlNet Hugging Face page at Download Link and should be placed in the ckpt folder (see the layout below).
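
If you prefer to script the download, a minimal sketch is shown below. The repo id lllyasviel/ControlNet and the models/ and annotator/ckpts/ file paths are assumptions to verify against the Download Link above; the script copies the files into the ./ckpt folder expected by StableVideo.

# sketch only: fetch the checkpoints into ./ckpt with huggingface_hub
# the repo id and file paths are assumptions -- check them on the Download Link page
import os
import shutil
from huggingface_hub import hf_hub_download

FILES = [
    "models/cldm_v15.yaml",
    "models/control_sd15_canny.pth",
    "models/control_sd15_depth.pth",
    "annotator/ckpts/dpt_hybrid-midas-501f0c75.pt",
]

os.makedirs("ckpt", exist_ok=True)
for name in FILES:
    cached = hf_hub_download(repo_id="lllyasviel/ControlNet", filename=name)
    # StableVideo expects the files directly under ./ckpt (see the layout below)
    shutil.copy(cached, os.path.join("ckpt", os.path.basename(name)))
    print("fetched", os.path.basename(name))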

Download example videos

Download the example atlases for car-turn, boat, libby, blackswan, bear, bicycle_tali, giraffe, kite-surf, lucia and motorbike at the Download Link shared by the Text2LIVE authors.

You can also train on your own video following NLA.

Downloading and extracting the examples will create a data folder. Together with the checkpoints, the project layout should look like this:

StableVideo
├── ...
├── ckpt
│   ├── cldm_v15.yaml
│   ├── dpt_hybrid-midas-501f0c75.pt
│   ├── control_sd15_canny.pth
│   └── control_sd15_depth.pth
├── data
│   ├── car-turn
│   │   ├── checkpoint # NLA models are stored here
│   │   ├── car-turn # contains video frames
│   │   └── ...
│   ├── blackswan
│   └── ...
└── ...
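
Before launching the app, you can optionally verify that this layout is in place. The sketch below is not part of the repository; it just lists missing checkpoint files and the example videos it finds under data/.

# optional sketch: check the ckpt/ and data/ layout shown above
from pathlib import Path

CKPT_FILES = [
    "cldm_v15.yaml",
    "dpt_hybrid-midas-501f0c75.pt",
    "control_sd15_canny.pth",
    "control_sd15_depth.pth",
]

missing = [f for f in CKPT_FILES if not (Path("ckpt") / f).is_file()]
videos = [p.name for p in Path("data").iterdir() if p.is_dir()] if Path("data").is_dir() else []

print("missing checkpoints:", missing or "none")
print("example videos found:", videos or "none")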

Run and Play!

Run the following command to start. We provide some prompt templates to help you achieve better results.

python app.py

The resulting .mp4 video and keyframes will be stored in the directory ./log after clicking the render button.
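
If you want to pick up the most recent render programmatically, for example to copy it elsewhere, a small sketch like the one below works under the assumption that rendered videos end up as .mp4 files somewhere under ./log:

# sketch: print the path of the most recently rendered video under ./log
from pathlib import Path

videos = sorted(Path("log").rglob("*.mp4"), key=lambda p: p.stat().st_mtime)
print(videos[-1] if videos else "no rendered videos found yet")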

Acknowledgement

This implementation is built partly on Text2LIVE and ControlNet.

License: Apache License 2.0

