
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei, Xintao Wang, Ying Shan, and Qifeng Chen

Paper | Project Page | Code

"Cat ➜ Posche Car*" "+ Van Gogh Style"

Abstract

TL;DR: FateZero edits your video with pretrained diffusion models, without any training.

CLICK for full abstract

Diffusion-based generative models have achieved remarkable success in text-based image generation. However, since the generation process contains enormous randomness, it is still challenging to apply such models to real-world visual content editing, especially in videos. In this paper, we propose FateZero, a zero-shot text-based editing method for real-world videos that requires neither per-prompt training nor user-specific masks. To edit videos consistently, we propose several techniques based on pre-trained models. First, in contrast to the straightforward DDIM inversion technique, our approach captures intermediate attention maps during inversion, which effectively retain both structural and motion information. These maps are directly fused in the editing process rather than generated during denoising. To further minimize semantic leakage from the source video, we then fuse self-attentions with a blending mask obtained from the cross-attention features of the source prompt. Furthermore, we reform the self-attention mechanism in the denoising UNet by introducing spatial-temporal attention to ensure frame consistency. Despite its simplicity, our method is the first to show zero-shot text-driven video style and local attribute editing from a trained text-to-image model. We also achieve better zero-shot shape-aware editing based on a text-to-video model. Extensive experiments demonstrate our superior temporal consistency and editing capability compared to previous works.

Changelog

  • 2023.03.21 We provide an editing guide to help users edit in-the-wild videos. Welcome to try it out and give feedback!
  • 2023.03.21 Update the codebase and configuration. It can now run on lower-resource machines (16 GB GPU memory and 16 GB CPU RAM) with the new configuration in config/low_resource_teaser. We also add an option to store all the attentions on the hard disk, which requires less RAM than the original configuration.
  • 2023.03.17 Release code and paper!

Todo

  • Release the editing config for the teaser
  • Memory and runtime profiling
  • Hands-on guidance for hyperparameter tuning
  • Colab and Hugging Face demos
  • Tune-A-Video optimization
  • Release configs for other results and in-the-wild datasets
  • Release more applications

Setup Environment

Our method is tested with CUDA 11, fp16 mixed precision via accelerate, and xformers on a single A100 or 3090.

conda create -n fatezero38 python=3.8
conda activate fatezero38

pip install -r requirements.txt
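
Before moving on, you may want to sanity-check the environment. The snippet below is an optional, minimal check (assuming the fatezero38 environment above is active); accelerate config lets you pick fp16 mixed precision interactively.

# optional: confirm PyTorch sees the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# optional: configure accelerate (choose fp16 mixed precision when prompted)
accelerate config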

xformers is recommended for A100 GPU to save memory and running time.

Click for xformers installation

We find its installation is not always stable. You may try the following wheel:

wget https://github.com/ShivamShrirao/xformers-wheels/releases/download/4c06c79/xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl
pip install xformers-0.0.15.dev0+4c06c79.d20221201-cp38-cp38-linux_x86_64.whl

Validate the installation by

python test_install.py

Our environment is similar to Tune-A-Video (official, unofficial) and prompt-to-prompt. You may check them for more details.

FateZero Editing

Style and Attribute Editing

Download stable diffusion v1-4 (or another image diffusion model of interest) and put it in ./ckpt/stable-diffusion-v1-4.

Click for bash command:
mkdir ./ckpt
# download from Hugging Face; takes about 20 GB of space
git lfs install
git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
cd ./ckpt
ln -s ../stable-diffusion-v1-4 .
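
If git-lfs is inconvenient, one possible alternative is to fetch the checkpoint with the Hugging Face CLI (a sketch assuming a reasonably recent huggingface_hub that ships the huggingface-cli download command); this downloads directly into the target path, so the symlink step above is not needed:

# sketch: download without git-lfs (requires `pip install huggingface_hub`)
huggingface-cli download CompVis/stable-diffusion-v1-4 --local-dir ./ckpt/stable-diffusion-v1-4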

Then, you could reproduce the style and shape editing results in our teaser by running:

accelerate launch test_fatezero.py --config config/teaser/jeep_watercolor.yaml
The result is saved as follows: (Click for directory structure)
result
├── teaser
│   ├── jeep_posche
│   ├── jeep_watercolor
│           ├── cross-attention  # visualization of cross-attention during inversion
│           ├── sample           # result
│           ├── train_samples    # the input video

Editing 8 frames on an NVIDIA 3090 uses about 100 GB of CPU memory and 12 GB of GPU memory. We also provide low-cost settings for style editing with different hyperparameters that fit on a 16 GB GPU (see the sketch below); more details are in the speed and hardware benchmark here.
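
For a low-memory run, the configurations mentioned in the changelog live under config/low_resource_teaser; the exact file names are not listed here, so check that directory and point test_fatezero.py at one of them, for example:

# list the shipped low-resource configs
ls config/low_resource_teaser
# then launch with one of them (the file name depends on what the directory contains):
# accelerate launch test_fatezero.py --config config/low_resource_teaser/<config_name>.yaml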

Shape and large motion editing with Tune-A-Video

Besides the style and attribute editing above, we also provide a Tune-A-Video checkpoint. You may download it and move it to ./ckpt/jeep_tuned_200/.

The directory structure should look like this: (Click for directory structure)
ckpt
├── stable-diffusion-v1-4
├── jeep_tuned_200
...
data
├── car-turn
│   ├── 00000000.png
│   ├── 00000001.png
│   ├── ...
video_diffusion
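
Before launching, a quick check that the checkpoints and input frames are where the config expects them (paths taken from the tree above):

ls ./ckpt/stable-diffusion-v1-4 ./ckpt/jeep_tuned_200
ls ./data/car-turn | head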

You could reproduce the shape editing result in our teaser by running:

accelerate launch test_fatezero.py --config config/teaser/jeep_posche.yaml

Tuning guidance to edit YOUR video

We provide a tuning guide for editing in-the-wild videos here. It is still a work in progress; feedback via issues is welcome.

Style Editing Results with Stable Diffusion

We show the difference between the source prompt and the target prompt in the box below each video.

Note: the mp4 and gif files on this GitHub page are compressed. Please check our Project Page for the mp4 files of the original video editing results.

"+ Ukiyo-e Style" "+ Watercolor Painting" "+ Monet Style"
"+ Pokémon Cartoon Style" "+ Makoto Shinkai Style" "+ Cartoon Style"

Attribute Editing Results with Stable Diffusion

"Squirrel ➜ robot squirrel" "Squirrel, Carrot ➜ Rabbit, Eggplant" "Squirrel, Carrot ➜ Robot mouse, Screwdriver"
"Bear ➜ A Red Tiger" "Bear ➜ A yellow leopard" "Bear ➜ A yellow lion"
"Cat ➜ Black Cat, Grass..." "Cat ➜ Red Tiger" "Cat ➜ Shiba-Inu"

Shape and large motion editing with Tune-A-Video

"Cat ➜ Posche Car" "Swan ➜ White Duck" "Swan ➜ Pink flamingo"
"A man ➜ A Batman" "A man ➜ A Wonder Woman, With cowboy hat" "A man ➜ A Spider-Man"

Demo Video

165a65fe9b83096a92a1bddb9bfff459.mp4

The video here is compressed due to GitHub's size limit. The original full-resolution video is here.

Citation

@misc{qi2023fatezero,
      title={FateZero: Fusing Attentions for Zero-shot Text-based Video Editing}, 
      author={Chenyang Qi and Xiaodong Cun and Yong Zhang and Chenyang Lei and Xintao Wang and Ying Shan and Qifeng Chen},
      year={2023},
      eprint={2303.09535},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

This repository borrows heavily from Tune-A-Video and prompt-to-prompt. Thanks to the authors for sharing their code and models.

Maintenance

This is the codebase for our research work. We are still working hard to update this repo, and more details are coming soon. If you have any questions or ideas to discuss, feel free to contact Chenyang Qi or Xiaodong Cun.

License

MIT License

