
Official implementation of FIFO-Diffusion

Home Page: https://jjihwan.github.io


FIFO-Diffusion: Generating Infinite Videos from Text without Training

💾 VRAM < 10GB             🚀 Infinitely Long Videos            ⭐️ Tuning-free
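
At a high level, FIFO-Diffusion keeps a queue of latent frames whose noise levels increase from head to tail; every iteration denoises the whole queue diagonally, pops the now-clean frame at the head, and pushes a fresh pure-noise latent at the tail, so generation can continue indefinitely. The toy sketch below only illustrates this queue mechanics; dummy_denoise, QUEUE_LEN, and the latent placeholders are hypothetical and not part of this repository's code.

from collections import deque

QUEUE_LEN = 16  # hypothetical: one queue slot per denoising step


def dummy_denoise(latent, noise_level):
    """Stand-in for one reverse-diffusion step of a video model at `noise_level`."""
    return latent  # a real model would partially denoise the latent here


def fifo_generate(num_frames):
    """Yield `num_frames` clean frames from a FIFO queue of noisy latents."""
    # Head holds the least noisy latent, tail the most noisy one.
    queue = deque((f"latent_{i}", i + 1) for i in range(QUEUE_LEN))
    next_id = QUEUE_LEN
    for _ in range(num_frames):
        # Diagonal denoising: every latent advances one step at its own noise level.
        queue = deque((dummy_denoise(lat, lvl), lvl - 1) for lat, lvl in queue)
        # The head latent has reached noise level 0 -> pop it as a finished frame.
        frame, _ = queue.popleft()
        yield frame
        # Push a fresh pure-noise latent at the tail to keep the queue full.
        queue.append((f"latent_{next_id}", QUEUE_LEN))
        next_id += 1


if __name__ == "__main__":
    for frame in fifo_generate(5):
        print("generated", frame)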


πŸ“½οΈ See more video samples in our project page!

"An astronaut floating in space, high quality, 4K resolution."

100 frames, 320 x 512 resolution

"A colony of penguins waddling on an Antarctic ice sheet, 4K, ultra HD."

100 frames, 320 x 512 resolution

News 📰

[2024.05.25] 🥳🥳🥳 We are thrilled to present our official PyTorch implementation of FIFO-Diffusion. We are releasing the code based on VideoCrafter2.

[2024.05.19] Our paper, FIFO-Diffusion: Generating Infinite Videos from Text without Training, is now available on arXiv.

Clone our repository

git clone git@github.com:jjihwan/FIFO-Diffusion_public.git
cd FIFO-Diffusion_public

β˜€οΈ Start with VideoCrafter

1. Environment Setup ⚙️ (python==3.10.14 recommended)

python3 -m venv .fifo
source .fifo/bin/activate

pip install -r requirements.txt
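
As an optional sanity check after installation (not part of the repository's scripts), you can confirm that the interpreter and a CUDA-enabled PyTorch are visible before running inference:

# Optional sanity check; assumes requirements.txt installed a CUDA build of torch.
import sys
import torch

print("python:", sys.version.split()[0])      # 3.10.x recommended above
print("torch :", torch.__version__)
print("cuda  :", torch.cuda.is_available())   # should print True on a GPU machine
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))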

2.1 Download the models from Hugging Face 🤗

Model                          Resolution    Checkpoint
VideoCrafter2 (Text2Video)     320x512       Hugging Face

2.2 Set file structure

Store the checkpoint in the following structure (an optional scripted download sketch follows the tree):

cd FIFO-Diffusion_public
    .
    └── videocrafter_models
        └── base_512_v2
            └── model.ckpt      # VideoCrafter2 checkpoint
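
If you prefer scripting step 2.1, the sketch below fetches the checkpoint with huggingface_hub and drops it into the directory shown above. The repo id VideoCrafter/VideoCrafter2 is an assumption; use the checkpoint link in the table if it differs.

# Hypothetical download helper; assumes the checkpoint is hosted on the
# Hugging Face Hub as "VideoCrafter/VideoCrafter2" with filename "model.ckpt".
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="VideoCrafter/VideoCrafter2",         # assumed repo id
    filename="model.ckpt",
    local_dir="videocrafter_models/base_512_v2",  # matches the tree above
)
print("checkpoint saved to", ckpt_path)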

3.1. Run with VideoCrafter2 (Single GPU)

Requires less than 9 GB of VRAM on a Titan XP.

python3 videocrafter_main.py --save_frames

3.2. Distributed Parallel inference with VideoCrafter2 (Multiple GPUs)

May consume slightly more memory than single-GPU inference (about 11 GB on a Titan XP). Please note that our parallel-inference implementation might not be optimal. Pull requests are welcome! 🤓

python3 videocrafter_main_mp.py --num_gpus 8 --save_frames
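
Conceptually, the parallel mode splits the frame queue into contiguous chunks and lets each GPU run its denoising passes on one chunk concurrently. The helper below only illustrates such a partitioning; it is not the repository's actual scheduling code, and partition_queue is a hypothetical name.

def partition_queue(queue_len: int, num_gpus: int) -> list:
    """Return one contiguous index range per GPU, with sizes differing by at most 1."""
    base, extra = divmod(queue_len, num_gpus)
    ranges, start = [], 0
    for gpu in range(num_gpus):
        size = base + (1 if gpu < extra else 0)
        ranges.append(range(start, start + size))
        start += size
    return ranges


if __name__ == "__main__":
    # e.g. a queue of 64 latents spread over the 8 GPUs used in the command above
    for gpu, idx in enumerate(partition_queue(64, 8)):
        print(f"gpu {gpu}: latents {idx.start}..{idx.stop - 1}")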

β˜€οΈ Start with Open-Sora Plan (Comming Soon)

1. Environment Setup ⚙️ (python==3.10.14 recommended)

cd FIFO-Diffusion_public
git clone git@github.com:PKU-YuanGroup/Open-Sora-Plan.git

python -m venv .sora
source .sora/bin/activate

cd Open-Sora-Plan
pip install -e .

2. Run with Open-Sora Plan

sh scripts/opensora_fifo_ddpm.sh

β˜€οΈ Start with zeroscope (Comming Soon)

1. Environment Setup ⚙️ (python==3.10.14 recommended)

python3 -m venv .fifo
source .fifo/bin/activate

pip install -r requirements.txt

2. Run with zeroscope

mkdir zeroscope_models         # directory where the model will be stored
python3 zeroscope_main.py
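
The model weights are expected to live under zeroscope_models/ (see the mkdir above). If you want to pre-download them manually, a sketch with huggingface_hub follows; the model id cerspense/zeroscope_v2_576w and the target folder name are assumptions, so adjust them to whatever zeroscope_main.py actually loads.

# Hypothetical pre-download of a zeroscope checkpoint into zeroscope_models/.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="cerspense/zeroscope_v2_576w",            # assumed model id
    local_dir="zeroscope_models/zeroscope_v2_576w",   # assumed target folder
)
print("model files saved to", path)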

😆 Citation

@article{kim2024fifo,
	title = {FIFO-Diffusion: Generating Infinite Videos from Text without Training},
	author = {Jihwan Kim and Junoh Kang and Jinyoung Choi and Bohyung Han},
	journal = {arXiv preprint arXiv:2405.11473},
	year = {2024},
}

🤓 Acknowledgements

Our codebase builds on VideoCrafter, Open-Sora Plan, and zeroscope. Thanks to the authors for sharing their awesome codebases!
